US20110096997A1 - Graphical image authentication - Google Patents

Graphical image authentication

Info

Publication number
US20110096997A1
Authority
US
United States
Prior art keywords
image
information
items
selection
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/669,008
Inventor
Tobias Marciszko
David De Leon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Mobile Communications AB
Original Assignee
Sony Ericsson Mobile Communications AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Ericsson Mobile Communications AB filed Critical Sony Ericsson Mobile Communications AB
Priority to US12/669,008
Assigned to SONY ERICSSON MOBILE COMMUNICATIONS AB (assignors: MARCISZKO, TOBIAS; DE LEON, DAVID)
Publication of US20110096997A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/36: User authentication by graphic or iconic representation

Definitions

  • a computer-readable medium may contain instructions executable by at least one processor.
  • the computer-readable medium may include one or more instructions for providing sub-images, one or more instructions for receiving selection of a plurality of sub-images, one or more instructions for constructing a first image, the first image being a unified image including the selected plurality of sub-images, one or more instructions for comparing the selected plurality of sub-images with previously selected sub-images, and one or more instructions for providing access to at least one of a device, a service, or a function when the selected plurality of sub-images match the previously selected sub-images.
  • the one or more instructions for providing sub-images may include one or more instructions for categorizing the sub-images.
  • the selected plurality of sub-images may include at least one of: two sub-images of different categories, or at least one sub-image that is an animation.
  • the one or more instructions for receiving selection may include one or more instructions for receiving selection of a character associated with the sub-image.
  • each sub-image may include a corresponding character, and the unified image may include a first code comprising the characters.
  • the one or more instructions for constructing may include one or more instructions for overlaying at least one of the selected plurality of sub-images onto at least one other of the selected plurality of sub-images.
  • the selected plurality of sub-images may be segments of the first image, and the one or more instructions for constructing may include one or more instructions for assembling the selected plurality of sub-images to form the first image based on the segmentation of each sub-image.
  • sub-images may include sub-images relating to living things, non-living things, and places.
  • the one or more instructions for constructing may include one or more instructions for overlaying the selected plurality of sub-images to corresponding specific regions on a scenic sub-image.
  • the plurality of sub-images may include a plurality of characters, and the one or more instructions for constructing may include one or more instructions for displaying the corresponding sub-images as the plurality of characters are selected.
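The computer-readable-medium operations above reduce to a small amount of logic. The following is a minimal sketch only, not the patent's implementation: the catalog, the function names, and the sample sub-image identifiers are invented for illustration, under the assumption that each sub-image is paired with a fixed character:

```python
# Hypothetical catalog pairing each sub-image with a category and a character.
CATALOG = {
    "woman_baseball_cap": ("living", "5"),
    "fireman_helmet":     ("living", "7"),
    "jungle":             ("place",  "X"),
    "spaceship":          ("thing",  "Q"),
}

def construct_unified_image(selection):
    """Construct the unified image (the ordered sub-images) and the first
    code comprising the characters paired with each selected sub-image."""
    image = tuple(selection)
    code = "".join(CATALOG[sub_image][1] for sub_image in selection)
    return image, code

def grant_access(selection, previously_selected):
    """Provide access only when the selected sub-images match the
    previously selected sub-images."""
    return construct_unified_image(selection) == construct_unified_image(previously_selected)
```

Because the code is derived from the same selection as the image, matching either one implies matching the other in this simplified model.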
  • a method may include providing sub-images having a plurality of categories, selecting a plurality of sub-images, constructing a first image that includes the selected plurality of sub-images, displaying the first image in a unified image, comparing information associated with the unified image with information associated with a previously constructed unified image, and providing access to at least one of a device, a service, or a function when the information associated with the unified image matches the information associated with the previously constructed unified image.
  • the selecting may include selecting the plurality of sub-images having at least two sub-images of different categories.
  • the method may include providing characters associated with the plurality of sub-images, and where the first image includes the selected plurality of sub-images and associated characters.
  • the selecting may include selecting the plurality of sub-images based on characters associated with the plurality of sub-images.
  • the constructing may include overlaying at least one of the selected plurality of sub-images onto another of the selected plurality of sub-images. Additionally, the constructing may include constructing the first image based on the selected plurality of sub-images that correspond to segments of the first image, each segment including a sub-image and at least one character, and each segment having a specific region to occupy in the first image.
  • the sub-images may include animation and video. Additionally, the constructing may include dragging and dropping the selected sub-image in an unoccupied region of the first image to be constructed.
  • a device may include a memory to store instructions, and a processor to execute the instructions to receive selection of a plurality of sub-images and characters, construct a first image that includes the selected plurality of sub-images and a first code that includes the selected characters, compare the received selections with previously stored selections associated with an image and a code, and provide access to a function or a service of the device when the received selections match the previously stored selections.
  • the device may include a display, and the processor may execute instructions to display the first image and the first code in a unified image on the display.
  • a device may include means for providing sub-images having different categories, means for receiving selection of a plurality of sub-images, means for constructing a first image that includes the selected plurality of sub-images, means for displaying the first image in a unified image, means for comparing the selections with previously stored selections, and means for providing access to a function or a service of the device when the selections match the previously stored selections.
  • a method may include receiving input associated with constructing a first image, generating the first image based on the input, comparing the first image with a second image, and providing access to at least one of a device, a service, or a function when the first image matches the second image.
  • the method may include generating a first passcode that is associated with the first image, and comparing the first passcode with a second passcode associated with the second image.
  • the receiving input associated with constructing the first image may include receiving a selection of a base sub-image to facilitate construction of the first image, and the method may further include generating a second sub-image based on the base sub-image.
  • generating the second sub-image may include receiving drawing input from a user associated with constructing the second sub-image.
  • the method may include displaying a grid that includes a plurality of coordinates in response to selection of the base sub-image.
  • the providing access may include comparing coordinate information associated with the first image with coordinate information associated with the second image. Additionally, the receiving input associated with constructing the first image may include receiving a selection of a plurality of sub-images and a corresponding character associated with each sub-image.
  • the providing access to at least one of the device, service, or function may include providing access to at least some of applications or functions of a mobile phone.
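The drawn-image variant above (a grid of coordinates displayed in response to selection of a base sub-image, then coordinate comparison) might be sketched as follows. The 8x8 grid size and the order-sensitive stroke comparison are assumptions for illustration, not details from the text:

```python
GRID = 8  # assumed grid resolution; the text does not specify one

def snap_to_grid(points, grid=GRID):
    """Quantize raw (x, y) drawing input in [0, 1) to grid coordinates,
    preserving stroke order."""
    return [(int(x * grid), int(y * grid)) for x, y in points]

def images_match(entered_points, stored_cells):
    """Compare coordinate information of the entered image with the
    coordinate information of the previously stored image."""
    return snap_to_grid(entered_points) == stored_cells
```

Quantizing to grid cells gives the user some tolerance: slightly different strokes that fall in the same cells still match.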
  • FIG. 1 is a diagram illustrating a concept described herein;
  • FIG. 2 is a diagram illustrating a front view of exemplary external components of an exemplary device having image-based code capability;
  • FIG. 3 is a diagram illustrating a rear view of exemplary external components of the device depicted in FIG. 2;
  • FIG. 4 is a diagram illustrating exemplary internal components of the device depicted in FIG. 2;
  • FIG. 5 is a diagram of the exemplary image-based code component depicted in FIG. 4;
  • FIG. 6 is a flow diagram illustrating exemplary operations for designing an image-based code;
  • FIG. 7 is a flow diagram illustrating exemplary operations for providing an image-based code to access a device, a service, or a function;
  • FIG. 8 is a flow diagram illustrating exemplary operations for designing an image to access a device, a service, or a function;
  • FIG. 9 is a flow diagram illustrating exemplary operations for providing an image to access a device, a service, or a function; and
  • FIGS. 10-14 are diagrams illustrating exemplary screenshots for providing an image-based code.
  • The terms image and sub-image are intended to be broadly interpreted to include any representation of graphical information (e.g., a picture, a video, an animation, etc.).
  • The term code is intended to be broadly interpreted to include any character string (e.g., letters, numbers, alphanumeric sequences, symbols, etc.).
  • The term character is intended to be broadly interpreted to include any letter, number, or symbol.
  • FIG. 1 is a diagram illustrating a concept 100 as described herein.
  • a device may include a display 104 and logic that provides for the design and/or use of an image-based code.
  • an image 102 may be displayed on display 104 of the device.
  • Image 102 may be divided into segments 106 .
  • Each segment 106 may include a sub-image of image 102 and a corresponding sub-character of code 108 .
  • a user may construct image 102 by choosing from an array of segments 106 using a tab 110 .
  • a user may select tab 110 and change the top portion of the image (i.e., a sub-image of a woman's head wearing a baseball cap) to another sub-image (e.g., a man's head wearing a fireman's helmet (not illustrated)).
  • a user may construct a unified image based on the selected segments.
  • the user's selection of the predetermined “correct” image 102 and/or code 108 may be used as a basis for security functions for the device.
  • the device may require correct selection of image 102 and/or code 108 before the device will allow access to itself.
  • the image may include a layering of sub-images.
  • a sub-image may include animation and/or video.
  • a user may select a character of a code to display a sub-image. Still further, additional variations will be described below.
  • a sub-image may be of any type, such as living things, non-living things, places, shapes, symbols, etc.
  • a user may construct a unique, single image based on a plurality of sub-images that may be more memorable compared to a code.
  • a user may still be provided with a corresponding code of the image that the user may remember.
  • FIG. 2 is a diagram illustrating a front view of exemplary external components of an exemplary device having image-based code capability.
  • device 200 may include a housing 205 , a microphone 210 , a speaker 220 , a keypad 230 , function keys 240 , a display 250 , and a camera button 260 .
  • the terms device and component, as used herein, are intended to be broadly interpreted to include hardware, software, and/or a combination of hardware and software.
  • Housing 205 may include a structure configured to contain components of device 200 .
  • housing 205 may be formed from plastic and may be configured to support microphone 210 , speaker 220 , keypad 230 , function keys 240 , display 250 , and camera button 260 .
  • Microphone 210 may include any component capable of transducing air pressure waves to a corresponding electrical signal. For example, a user may speak into microphone 210 during a telephone call.
  • Speaker 220 may include any component capable of transducing an electrical signal to a corresponding sound wave. For example, a user may listen to music through speaker 220 .
  • Keypad 230 may include any component capable of providing input to device 200 .
  • Keypad 230 may include a standard telephone keypad.
  • Keypad 230 may also include one or more special purpose keys.
  • each key of keypad 230 may be, for example, a pushbutton.
  • a user may utilize keypad 230 for entering information, such as text or a phone number, or activating a special function.
  • Function keys 240 may include any component capable of providing input to device 200 .
  • Function keys 240 may include a key that permits a user to cause device 200 to perform one or more operations.
  • the functionality associated with a key of function keys 240 may change depending on the mode of device 200 .
  • function keys 240 may perform a variety of operations, such as placing a telephone call, playing various media, setting various camera features (e.g., focus, zoom, etc.) or accessing an application.
  • Function keys 240 may include a key that provides a cursor function and a select function. In one implementation, each key of function keys 240 may be, for example, a pushbutton.
  • Display 250 may include any component capable of providing visual information.
  • display 250 may be a liquid crystal display (LCD).
  • display 250 may be any one of other display technologies, such as a plasma display panel (PDP), a field emission display (FED), a thin film transistor (TFT) display, etc.
  • Display 250 may be utilized to display, for example, text, image, and/or video information.
  • Display 250 may also operate as a view finder, as will be described later.
  • Camera button 260 may be a pushbutton that enables a user to take an image.
  • Device 200 is exemplary; device 200 is intended to be broadly interpreted to include any type of electronic device where an image-based code may be utilized.
  • device 200 may include a communication device, such as a wireless telephone or a personal digital assistant (PDA), a computational device, such as a computer, an entertainment device, such as a game system, a stationary device, such as a security system, or any other type of device that includes a display in which an image-based code may be utilized.
  • Although FIG. 2 illustrates exemplary external components of device 200, in other implementations device 200 may contain fewer, different, or additional external components than the external components depicted in FIG. 2.
  • one or more external components of device 200 may include the capabilities of one or more other external components of device 200 .
  • display 250 may be an input component (e.g., a touch screen).
  • the external components may be arranged differently than the external components depicted in FIG. 2 .
  • a user may access a function or service via a network (e.g., the Internet, a private network, a wireless network, a television network, etc.) where an image-based code may be utilized.
  • a user may visit a Web server to gain access to a credit card account, a banking account, an e-mail account, a video rental service account, etc. based on an image-based code.
  • the concept described herein may be applied to various platforms and schemes.
  • FIG. 3 is a diagram illustrating a rear view of exemplary external components of the device depicted in FIG. 2 .
  • device 200 may include a camera 370 , a lens assembly 372 , and a flash 374 .
  • Camera 370 may include any component capable of capturing an image. Camera 370 may be a digital camera. Display 250 may operate as a view finder when a user of device 200 operates camera 370 . Camera 370 may provide for automatic and/or manual adjustment of a camera setting.
  • device 200 may include camera software that is displayable on display 250 to allow a user to adjust a camera setting. For example, a user may be able to adjust a camera setting by operating function keys 240 and/or camera button 260.
  • Lens assembly 372 may include any component capable of manipulating light so that an image may be captured.
  • Lens assembly 372 may include a number of optical lens elements.
  • the optical lens elements may be of different shapes (e.g., convex, biconvex, plano-convex, concave, etc.) and different distances of separation.
  • An optical lens element may be made from glass, plastic (e.g., acrylic), or plexiglass.
  • lens assembly 372 may be permanently fixed to camera 370 .
  • Lens assembly 372 may provide for a variable aperture size (e.g., adjustable f-number).
  • Flash 374 may include any type of light-emitting component to provide illumination when camera 370 captures an image.
  • flash 374 may be a light-emitting diode (LED) flash (e.g., white LED) or a xenon flash.
  • device 200 may include fewer, additional, and/or different components than the exemplary external components depicted in FIG. 3 .
  • device 200 may not include camera 370 and other components associated therewith.
  • one or more external components of device 200 may be arranged differently.
  • FIG. 4 is a diagram illustrating exemplary internal components of the device depicted in FIGS. 2 and 3 .
  • device 200 may include microphone 210 , speaker 220 , keypad 230 , function keys 240 , display 250 , camera button 260 , camera 370 , a memory 400 , a transceiver 420 , and a control unit 430 . No further description of microphone 210 , speaker 220 , keypad 230 , function keys 240 , display 250 , camera button 260 , and camera 370 is provided with respect to FIG. 4 .
  • Memory 400 may include any type of storing component to store data and instructions related to the operation and use of device 200 .
  • memory 400 may include a memory component, such as a random access memory (RAM), a read only memory (ROM), and/or a programmable read only memory (PROM).
  • memory 400 may include a storage component, such as a magnetic storage component (e.g., a hard drive) or other type of computer-readable medium.
  • Memory 400 may also include an external storing component, such as a Universal Serial Bus (USB) memory stick, a digital camera memory card, and/or a Subscriber Identity Module (SIM) card.
  • Memory 400 may include an image-based code component 410 .
  • Image-based code component 410 may include instructions to cause device 200 to provide image-based code capability as described herein. Image-based code component 410 will be described in greater detail below.
  • Transceiver 420 may include any component capable of transmitting and receiving information.
  • transceiver 420 may include a radio circuit that provides wireless communication with a network or another device.
  • Control unit 430 may include any logic that may interpret and execute instructions, and may control the overall operation of device 200 .
  • Logic, as used herein, may include hardware, software, and/or a combination of hardware and software.
  • Control unit 430 may include, for example, a general-purpose processor, a microprocessor, a data processor, a co-processor, and/or a network processor.
  • Control unit 430 may access instructions from memory 400 , from other components of device 200 , and/or from a source external to device 200 (e.g., a network or another device).
  • Control unit 430 may provide for different operational modes associated with device 200. Additionally, control unit 430 may operate in multiple modes simultaneously. For example, control unit 430 may operate in a camera mode, a walkman mode (e.g., a music playing mode), and/or a telephone mode. In one implementation, a user may prevent access to device 200 by employing an image-based code. The image-based code capability of device 200 will be described in greater detail below.
  • device 200 may include fewer, additional, and/or different components than the exemplary internal components depicted in FIG. 4 .
  • device 200 may not include transceiver 420 .
  • one or more internal components of device 200 may include the capabilities of one or more other components of device 200 .
  • transceiver 420 and/or control unit 430 may include their own on-board memory 400 .
  • FIG. 5 is a diagram of the exemplary image-based code component depicted in FIG. 4 .
  • Image-based code component 410 may include an image store 510 , an image arranger 520 , and/or an image/code comparer 530 .
  • image-based code component 410 may include a graphical user interface (GUI).
  • the GUI may include various graphical interfaces, such as icons, menus, tabs, drag-and-drop interface, etc. to permit the design and selection of an image/code pair and/or an image.
  • Image store 510 may allow a user to display various sub-images, such as living objects (e.g., people, animals, plants, etc.) or non-living objects (e.g., places, things, shapes, symbols, etc.) and characters.
  • Each of the sub-images may co-exist with one or more characters.
  • a sub-image/character pair may be fixed.
  • one of the sub-image/character pairs is a woman's head with a baseball cap (sub-image) and the number five (character).
  • image store 510 may not allow, for example, the number five to be changed to a different number because the sub-image/character pair (i.e., the woman's head with a baseball cap and the number five) is a fixed sub-image/character pair. In other implementations, however, image store 510 may provide for the customization of sub-image/character pairs.
  • image store 510 may provide a GUI to change, for example, one sub-image/character pair to a new and different sub-image/character pair. In this way, a user may design a sub-image/character pair that includes, perhaps, a favorite sub-image with a favorite character (e.g., lucky number).
  • image store 510 may provide a GUI where sub-images may not co-exist with one or more characters. For example, a user may create an image without a corresponding code.
  • Image store 510 may also allow a user to import sub-images.
  • the GUI may import a sub-image from a resource external to device 200 , such as the Internet, or from a resource internal to device 200 , such as memory 400 .
  • an image captured by camera 370 and stored in memory 400 may be added to the image store 510 .
  • image store 510 may provide a GUI to import new characters.
  • a user may import unique symbols (e.g., Chinese characters or abstract symbols) to be associated with a sub-image.
  • Image store 510 may also allow a user to create a sub-image and/or a character.
  • image store 510 may provide tools (e.g., drawing tools, painting tools, etc.) to create a sub-image and/or a character.
  • Image store 510 may also provide a GUI to manage the size, shape and/or orientation of an imported sub-image so that the sub-image may be utilized to form an image.
  • a user may create a sub-image and/or an image utilizing, for example, a stylus, his or her finger, keys on keypad 230 , a joystick, touchpad, etc.
  • Image arranger 520 may allow the user to construct an image and a code based on the sub-images and the characters from image store 510 . That is, a user may construct a unified image based on the sub-images. Described below are two exemplary implementations that may be employed to construct an image and a code; however, other implementations may be realized.
  • an image/code pair may include a plurality of segments, where each segment includes at least a sub-image and a corresponding character.
  • the GUI of image arranger 520 may include, for example, tabs, such as tabs 110 , to select a segment, or may include a drag-and-drop interface to select a segment.
  • the segmentation of the image/code pair may or may not be uniform.
  • each segment of the image/code pair may or may not be of similar size, shape and/or orientation.
  • each segment of the image/code pair may or may not differ in contribution to the overall image and/or code. That is, for an image/code pair, one segment may include a sub-image that contributes up to fifty percent of the image, and may contribute, for example, three characters of a six character code. Conversely, one segment may include a sub-image that contributes up to ten percent of the image, and may contribute, for example, one character of a five character code.
  • an image/code pair may not be completely segmented.
  • an image/code pair may include an initial, static sub-image with no corresponding character, and the remaining portion of the image and the code may be constructed based on segments having a sub-image and a character.
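The segmented image/code pair described above, including segments that contribute different numbers of characters and an initial static sub-image with no corresponding character, can be modeled compactly. The Segment type and the sample values below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    sub_image: str
    characters: str  # "" models a static sub-image with no corresponding character

def assemble(segments):
    """Assemble the selected segments into a unified image and its code; a
    segment may contribute several characters, one character, or none."""
    image = [segment.sub_image for segment in segments]
    code = "".join(segment.characters for segment in segments)
    return image, code
```

This captures the point that segments need not contribute uniformly: one segment might supply half the image and three characters, another a small region and a single character.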
  • an image/code pair may be configured as a layering of sub-images having coexisting characters.
  • image-based code component 410 may include a GUI that provides selection of a base sub-image, such as a scenic sub-image (e.g., a jungle, outer space, a room in a house, underwater in an ocean, etc.) and various sub-images to overlay (e.g., by drag-and-drop) on the scenic sub-image.
  • overlay sub-images may range from an overlay sub-image (e.g., a space ship or an exotic animal) that relates to the scenic sub-image (e.g., outer space or jungle), to an overlay sub-image (e.g., baseball bat or washing machine) that is relatively unrelated to the scenic sub-image (e.g., outer space or jungle).
  • the overlay sub-images may be dispersed in various regions of the base sub-image.
  • a base sub-image may also be, for example, an abstract image, such as a colored circle.
  • a region for an overlay sub-image may be fixed.
  • a base sub-image may include specific regions that accept an overlay sub-image.
  • the next region to accept an overlay sub-image may be highlighted.
  • the specific regions to be occupied may have a particular order, while in other implementations the specific regions to be occupied may not have a particular order.
  • a base sub-image may not include specific regions to be occupied. That is, an overlay sub-image may be placed anywhere on the base sub-image. In one implementation, the order in which the overlay sub-image is placed with respect to the base sub-image may change the order of the corresponding code.
  • For example, a scenic sub-image (e.g., a jungle “X”) may include three animals (e.g., a lion “5”, a giraffe “4”, and a snake “H”). When the order of overlaying the overlay sub-images is lion, snake, and giraffe, the code may be “X5H4”; however, when the order of placing the overlay sub-images is snake, lion, and giraffe, the code may be “XH54”.
  • Thus, although the image may be the same (i.e., the image contains the same animals with corresponding characters placed in the same regions), the code may be different.
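The jungle example can be worked directly in code. The characters (“X”, “5”, “4”, “H”) come from the text above; the dictionary and function names are assumptions:

```python
# Characters paired with each overlay sub-image, per the jungle example.
ANIMALS = {"lion": "5", "giraffe": "4", "snake": "H"}

def overlay_code(base_character, placement_order):
    """Build the code from the base sub-image's character followed by the
    overlay characters in the order the overlays were placed."""
    return base_character + "".join(ANIMALS[animal] for animal in placement_order)
```

The two placement orders yield different codes even though the final images contain the same animals, which is what makes placement order part of the secret.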
  • image-based code component 410 may provide a selection of a base sub-image.
  • a user may create an image by, for example, drawing the image on display 250 utilizing a stylus, tools of image store 510 , or some other input mechanism (e.g., a joystick, a touch pad, etc.).
  • a user may create and/or select a code to correspond to the image.
  • image-based code component 410 may automatically generate a code to correspond to the image.
  • a user may create an image without having a corresponding code. That is, a user may only utilize the image, for example, to gain access to device 200 , by drawing the image on display 250 .
  • Image/Code comparer 530 may include a component to compare one image-based code to another image-based code. For example, image/code comparer 530 may compare an image-based code previously stored in memory 400 to an image-based code entered by a user when trying to use device 200 . Image/Code comparer 530 may compare an image, a sub-image, a character, a code and/or information (e.g., identifiers or coordinates) associated therewith of the image-based code stored in memory 400 with an image, a sub-image, a character, a code and/or information (e.g., identifiers or coordinates) associated therewith of the image-based code entered.
  • image/code comparer 530 may provide, for example, an indication (e.g., a visual or an auditory cue) corresponding to the result of the comparison. Additionally, or alternatively, for example, image/code comparer 530 may not provide any indication of the result; rather, device 200 will permit access or deny access depending on the result of the comparison.
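Image/code comparer 530's behavior might look like the following sketch, assuming the stored and entered image-based codes are records of identifiers, coordinates, and a code string; the record shape and names are invented for illustration:

```python
def codes_match(stored, entered):
    """Compare the stored image-based code with the entered one on the
    associated information (identifiers, coordinates) and the code itself."""
    return (stored["identifiers"] == entered["identifiers"]
            and stored["coordinates"] == entered["coordinates"]
            and stored["code"] == entered["code"])

def authenticate(stored, entered, give_indication=False):
    """Grant or deny access; optionally surface a cue (e.g., a visual or
    auditory indication) corresponding to the result of the comparison."""
    matched = codes_match(stored, entered)
    if give_indication:
        print("match" if matched else "no match")
    return matched
```

Keeping the indication optional mirrors the two policies above: a device may report the comparison result, or silently permit or deny access.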
  • Although FIG. 5 illustrates exemplary components to provide the image-based code capability as described herein, in other implementations device 200 may include fewer, different, or additional components than the exemplary components depicted in FIG. 5.
  • image-based code component 410 may provide various functional capabilities for designing and employing an image-based code with, for example, device 200 .
  • In other implementations, e.g., a Web server employing image-based code component 410 for accessing a bank account, some of the functions described above may or may not be provided.
  • image-based code component 410 may not provide for importing sub-images and/or characters, or altering a sub-image/character pair.
  • FIG. 6 is a flow diagram illustrating exemplary operations for designing an image-based code.
  • Process 600 may begin with a selection of a sub-image having a corresponding character, or a selection of a character having a corresponding sub-image (Block 610 ).
  • Image-based code component 410 may provide a GUI on display 250 .
  • the GUI may provide for the selection of a sub-image and a corresponding character.
  • a character may be entered from keypad 230 and the corresponding sub-image may be displayed on display 250 .
  • the GUI of image-based code component 410 may include, for example, various menus, icons, tabs, drag-and-drop interface, etc. to permit selection of a sub-image and/or character.
  • image-based code component 410 may determine whether an image/code pair is created. For example, image-based code component 410 may determine whether the number of sub-images or characters is insufficient or whether certain sub-image regions are unoccupied. If an image/code pair is not created (Block 620 —NO), then additional selections may be needed to complete an image/code pair. When an image/code pair is created (Block 620 —YES), the image/code pair may be stored in memory 400 (Block 630 ).
  • FIG. 7 is a flow diagram illustrating exemplary operations for providing image-based code access to a device, such as device 200 .
  • Process 700 may begin with a selection of a sub-image having a corresponding character, or a selection of a character having a corresponding sub-image (Block 710 ).
  • Image-based code component 410 may provide a GUI on display 250 .
  • the GUI may provide for the selection of a sub-image and a corresponding character.
  • a character may be entered from keypad 230 and the corresponding sub-image may be displayed on display 250 .
  • the GUI of image-based code component 410 may include, for example, various menus, icons, tabs, drag-and-drop interface, etc., to permit selection of a sub-image and/or a character.
  • image-based code component 410 may determine whether an image/code pair is created. For example, image-based code component 410 may determine whether the number of sub-images or characters is insufficient, or whether certain sub-image regions are unoccupied. If an image/code pair is not created (Block 720 —NO), then additional selections may be needed to complete an image/code pair. When an image/code pair is created (Block 720 —YES), image-based code component 410 may compare the entered image/code pair with another image/code pair to determine whether a match exists (Block 730 ). For example, image-based code component 410 may make a comparison with an image/code pair stored in memory 400 . When the comparison is successful (Block 730 —YES), access to device 200 may be granted (Block 740 ); however, when the comparison is not successful (Block 730 —NO), access to device 200 may be denied (Block 750 ).
  • FIG. 8 is a flow diagram illustrating exemplary operations for designing an image-based code.
  • Process 800 may begin with creating an image having a corresponding code, or creating an image without a corresponding code (Block 810 ).
  • image-based code component 410 may provide a GUI on display 250 .
  • the GUI may include various menus, icons, tabs, and/or drawing tools, etc. to create an image and include a region on display 250 to create one or more images.
  • a user may create an image on display 250 utilizing his/her finger, a stylus, one or more keys of keypad 230 , a joystick, a touchpad, etc.
  • a user may create an image without selecting each sub-image to create the image. For example, a user may create an image merely by using his or her finger, a stylus, one or more keys, a joystick, a touchpad, etc. Additionally, or alternatively, a user may select a base sub-image (e.g., a grid) to draw on and to serve as a guide to create an image. As described above, an image may be created with or without a corresponding code.
  • a code corresponding to an image may be selected by a user. Additionally, or alternatively, a code may be automatically generated by image-based code component 410 .
  • image-based code component 410 may determine whether the image with the corresponding code is created or the image without the corresponding code is created.
  • image-based code component 410 may determine whether the image with or without the corresponding code is created based on whether a user enters the image or code (e.g., by pressing an Enter key, etc.). If the image with or without the corresponding code is not created (Block 820 —NO), then additional information (i.e., image creation or code creation) may be needed to complete the image with or without the corresponding code.
  • the image with or without the corresponding code may be stored in memory 400 (Block 830 ).
  • the stored image may include, among other things, coordinate information corresponding to the base sub-image (e.g., a grid) and/or drawing data.
  • FIG. 9 is a flow diagram illustrating exemplary operations for providing image-based code access to a device, such as device 200 .
  • Process 900 may begin with creating an image having a corresponding code, or creating an image without a corresponding code (Block 910 ).
  • Image-based code component 410 may provide a GUI on display 250 .
  • the GUI may include various menus, icons, tabs, and/or drawing tools, etc. to create an image and include a region on display 250 to create an image.
  • a user may create an image on display 250 utilizing his/her finger, a stylus, one or more keys of keypad 230 , a joystick, a touchpad, etc.
  • a user may create an image without selecting each sub-image to create the image, as described above with respect to FIG. 8 . Additionally, or alternatively, a user may select a base sub-image (e.g., a grid) to draw on and to serve as a guide to create an image.
  • an image may be created with or without a corresponding code.
  • a code corresponding to an image may be selected by a user. Additionally, or alternatively, a code may be automatically generated by image-based code component 410 .
  • image-based code component 410 may determine whether the image with the corresponding code is created or the image without the corresponding code is created. For example, image-based code component 410 may determine whether the image with or without the corresponding code is created based on whether a user enters the image or code (e.g., by pressing an Enter key, etc.). If the image with or without the corresponding code is not created (Block 920 —NO), then additional information (i.e., image creation or code creation) may be needed to complete the image with or without the corresponding code.
  • image-based code component 410 may compare the entered image with or without the corresponding code to another image with or without a corresponding code to determine whether a match exists (Block 930 ). For example, image-based code component 410 may make a comparison with image information and/or code information stored in memory 400 . When the comparison is successful (Block 930 —YES), access to device 200 may be granted (Block 940 ); however, when the comparison is not successful (Block 930 —NO), access to device 200 may be denied (Block 950 ).
  • FIGS. 6-9 illustrate exemplary operations for permitting access to device 200 ; however, as previously mentioned, in other instances, a user may access a function or service via a network (e.g., the Internet, a public switched telephone network, a private network, a wireless network, a television network, etc.) where an image-based code may be utilized.
  • a user may visit a Web server to gain access to a credit card account, a banking account, an e-mail account, a video rental service account, etc. based on an image-based code.
  • the concept described herein may be applied to various platforms and schemes.
  • a user may enter only the code (when a code is created) to access device 200, in correspondence to the flow diagrams described above.
  • a code may not be required once access is granted to device 200 based on an image-based code.
  • a two-level code may be employed for access to some functions.
  • the image-based code may be used to access certain applications and/or functions (e.g., placing a call), and a second code (e.g., an alphanumeric code) may be used to access other applications and/or functions (e.g., modifying a phonebook contact list).
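The two-level scheme described above can be sketched as an access table mapping each function to the level of code it requires. The table contents and function names are hypothetical, chosen only to mirror the examples given (placing a call vs. modifying a phonebook contact list).

```python
# Hypothetical two-level access table: the image-based code (level 1) unlocks
# basic functions; sensitive functions also require a second code (level 2).
REQUIRED_LEVEL = {
    "place_call": 1,
    "play_media": 1,
    "modify_phonebook": 2,
    "view_credit_card_info": 2,
}

def may_access(function_name, image_code_ok, second_code_ok=False):
    # Unknown functions default to the stricter level.
    level = REQUIRED_LEVEL.get(function_name, 2)
    if level == 1:
        return image_code_ok
    return image_code_ok and second_code_ok
```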
  • FIGS. 10-14 are diagrams illustrating exemplary screenshots for providing an image-based code. The description below omits discussion relating to a user's selection of, for example, various menus, prompts, and/or graphical links to arrive at the screenshots depicted in FIGS. 10-14 . As illustrated, each screenshot is displayed on display 250 of device 200 .
  • FIG. 10 is a diagram illustrating an image/code pair based on layering of sub-images.
  • the GUI of image-based code component 410 includes a code region 1010 , an image region 1020 , and a sub-image selector region 1030 .
  • Code region 1010 indicates the corresponding characters of the sub-images in image region 1020 .
  • Sub-image selector region 1030 provides a GUI component to select various sub-images.
  • image region 1020 includes a base sub-image, such as a jungle, with a corresponding character of π (pi).
  • Various sub-images may be placed on the scenic sub-image using, for example, drag-and-drop.
  • a user may drag the sub-image of the monkey onto the sub-image of the jungle, and the letter Z may appear in the code region 1010 .
  • the user may drag the sub-image of the giraffe onto the sub-image of the jungle, and the number four may appear in the code region 1010 .
  • the user may drag the lion and the snake to image region 1020 and the corresponding letters C and E may appear in the code region 1010 .
  • the scenic sub-image does not include any specific regions where an overlay sub-image may be placed.
  • a base image may include specific regions where a sub-image may be placed.
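The layering interaction of FIG. 10 can be sketched as a mapping from sub-images to their corresponding characters: each sub-image dragged onto the scene appends its character to the code region. The mapping below simply mirrors the jungle example (base scene π, monkey Z, giraffe 4, lion C, snake E); it is illustrative, not a disclosed data structure.

```python
# Sketch of the FIG. 10 layering behavior: dragging a sub-image onto the
# base scene appends its corresponding character to code region 1010.
SUB_IMAGE_CHARS = {"jungle": "π", "monkey": "Z", "giraffe": "4", "lion": "C", "snake": "E"}

def build_code(base, overlays):
    code = SUB_IMAGE_CHARS[base]
    for sub_image in overlays:       # drag-and-drop order determines code order
        code += SUB_IMAGE_CHARS[sub_image]
    return code
```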
  • FIG. 11 is a diagram illustrating a partial image/code pair based on segmentation.
  • the GUI of image-based code component 410 includes a code region 1110 , an image region 1120 , and a sub-image selector region 1130 .
  • code region 1110 includes three characters (i.e., 4, A, and Z) corresponding to the three strokes in image region 1120 , and three unoccupied character regions numbered four and five.
  • Image region 1120 includes two straight lines and one curved line corresponding to the three characters, and two unoccupied segment regions numbered four and five corresponding to unoccupied character regions numbered four and five. The size and orientation of each unoccupied segment region is different, and each unoccupied segment region is highlighted.
  • unoccupied segment region four may have two corresponding characters in code region 1110
  • unoccupied segment region five has one corresponding character in code region 1110 .
  • Sub-image selector region 1130 includes a pull-down menu to select the type of image to be constructed.
  • the image to be constructed is of a symbol. So, for example, a user may select the category of the image to be designed and/or entered by using the pull-down menu in sub-image selector region 1130 . Additionally, a pull-down menu indicates the next segment to be occupied. Sub-image selector region 1130 may automatically provide sub-images that correspond with the size, shape, and orientation of, for example, the next unoccupied segment.
  • the pull-down menu may indicate segment one, and sub-image selector region 1130 may automatically provide sub-images that correspond with the first segment to be occupied.
  • a user may select a sub-image and drag the sub-image into image region 1120, and the corresponding character, such as the number 4, may appear in code region 1110.
  • a user may have already selected sub-image/character pairs for segments one, two and three (i.e., the two straight lines and the one curved line in image region 1120 ). Segment four appears in the pull-down menu of sub-image selector region 1130 , and unoccupied segment four may be highlighted as the next unoccupied segment in image region 1120 . However, any segment may be selected using the pull-down menu. For example, if a user selects an incorrect sub-image/character pair and/or wishes to change the sub-image/character pair, the user may re-select a different sub-image/character pair.
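The filtering behavior of sub-image selector region 1130 can be sketched as a query over a sub-image library: only sub-images whose size, shape, and orientation fit the next unoccupied segment are offered. The record fields (`size`, `shape`, `orientation`) are invented for illustration.

```python
# Sketch of the FIG. 11 behavior: sub-image selector region 1130 automatically
# provides only sub-images matching the next unoccupied segment's attributes.
def candidates_for_segment(segment, library):
    return [s for s in library
            if s["size"] == segment["size"]
            and s["shape"] == segment["shape"]
            and s["orientation"] == segment["orientation"]]
```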
  • FIG. 12 is a diagram illustrating a partial image/code pair based on segmentation and layering.
  • the GUI of image-based code component 410 includes a code region 1210 , an image region 1220 , and a sub-image selector region 1230 .
  • image region 1220 includes a static sub-image (i.e., a female head) without a corresponding character.
  • Sub-image selector region 1230 provides a pull-down menu indicating that a female person is selected. A user may select from various categories of images to be designed and/or entered.
  • Sub-image selector region 1230 also indicates the first of the sub-images to be used.
  • sub-image selector region 1230 indicates that sub-images of eyes, which are animated, may be selected.
  • the sub-image of eyes may be placed on the face portion of the female person. That is, the sub-image of eyes is an overlay sub-image.
  • the next type of sub-image may be provided, such as different animations of a torso (e.g., twisting, using a hula hoop, etc.).
  • the sub-image of the torso may be a segment sub-image. So, for example, when a sub-image of eyes is selected, the pull-down menu may indicate that sub-images of a torso may be selected.
  • the image/code pair may include a combination of overlay sub-images and segment sub-images. In other instances, the image/code pair may include only overlay sub-images or only segment sub-images.
  • FIG. 13 is a diagram illustrating a user-created image.
  • the GUI of image-based code component 410 may include a code region 1310 , an image region 1320 , and/or a menu region 1330 .
  • image region 1320 may allow a user to create an image.
  • When display 250 is a touch screen, a user may utilize his/her finger, a stylus, or some other instrument to create an image, such as the T-shaped image illustrated in image region 1320 .
  • In menu region 1330 , a user may select a grid to use as a guide to create an image. In this way, a user may create an image simply by connecting the dots.
  • menu region 1330 may provide a selection of a grid size (e.g., 2×2 or 4×4).
  • a user may desire to select a larger grid (e.g., a 5×5 or a 6×6).
  • a larger grid may allow the user to create a more complex image, which may translate into a greater level of security.
  • a more complex image may be created using an 8×8 grid (e.g., a smiley face).
  • Other grid patterns may be utilized to provide greater complexity and/or sophistication of the image.
  • Different levels of security may call for images of different complexity. For example, access to credit card information may require a more complex image-based code compared to other information that may be deemed less sensitive.
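One way to see why a larger grid may translate into greater security is a rough keyspace estimate: an image traced through k distinct dots on an n×n grid has P(n², k) ordered possibilities. This counting model is an illustrative assumption, not part of the disclosure.

```python
# Rough keyspace estimate for the grid-based drawing in FIG. 13: the number
# of ordered choices of `strokes` distinct dots on an n-by-n grid.
from math import perm

def drawing_keyspace(n, strokes):
    return perm(n * n, strokes)
```

For example, three dots on a 2×2 grid allow only 24 orderings, while the same three dots on a 4×4 grid allow 3360.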
  • menu region 1330 may allow a user to select whether a code is to be created with the image.
  • a user may select that no code is to be created.
  • a user may create the image without creating a corresponding code.
  • a user may select that a code is created. Since the user is creating the image, without, for example, a drag-and-drop of sub-images, a user may type in a code after the image is created.
  • image-based code component 410 may generate, for example, a code of random characters, and display the characters in code region 1310 .
  • an order of strokes by which the image is created by a user may determine the code and/or whether the image is correct. That is, a user may create the same image multiple ways; however, if the order of the strokes is used by image-based code component 410 when determining whether to allow access to device 200 , only one image may allow a user to access device 200 . In each case, the user may know whether the order of strokes is being used by image-based code component 410 to allow access to device 200 . In one implementation, the order of strokes may be based on coordinates corresponding to the dots of the grid.
  • an order by which a user draws, for example, some symbol, shape, etc., using a series of dots may allow image-based code component 410 to determine an order of the strokes by which an image is created.
  • image/code comparer 530 may utilize coordinate information corresponding to the dots of the grid to determine whether an image matches a pre-stored image.
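The stroke-order check described above can be sketched as follows: the same final image can be drawn with different stroke sequences, so image/code comparer 530 may compare the ordered coordinate sequences rather than just the set of grid dots covered. The representation (each stroke as a list of dot coordinates) is an assumption for illustration.

```python
# Sketch of the stroke-order comparison: when order matters, the ordered
# stroke sequences must match; otherwise only the covered grid dots must match.
def strokes_match(entered_strokes, stored_strokes, order_matters=True):
    if order_matters:
        return entered_strokes == stored_strokes
    # Order-insensitive: compare only which grid dots the image covers.
    flatten = lambda strokes: {pt for stroke in strokes for pt in stroke}
    return flatten(entered_strokes) == flatten(stored_strokes)
```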
  • a user may enter a letter and/or a number in code regions 1010 , 1110 , 1210 or 1310 without selecting a sub-image or creating an image.
  • the corresponding sub-image may be provided in image regions 1020 , 1120 , 1220 , or 1320 .
  • selections from menus, etc., as described above, may be implemented as user preferences.

Abstract

A method may include receiving input associated with constructing a first image; generating the first image based on the input; comparing the first image with a second image; and providing access to at least one of a device, a service, or a function when the first image matches the second image.

Description

    BACKGROUND
  • There are countless services, functions and devices that employ some form of password or personal identification number (PIN) for their use. Given the widespread utilization of these types of codes, most users have multiple codes that they have to remember. However, it is not uncommon for users to have difficulty recalling a code. While there are a variety of reasons why users suffer from this difficulty, it would be beneficial to provide users with an alternative approach.
  • SUMMARY
  • According to one aspect, a computer-readable medium may contain instructions executable by at least one processor. The computer-readable medium may include one or more instructions for providing sub-images, one or more instructions for receiving selection of a plurality of sub-images, one or more instructions for constructing a first image, the first image being a unified image including the selected plurality of sub-images, one or more instructions for comparing the selected plurality of sub-images with previously selected sub-images, and one or more instructions for providing access to at least one of a device, a service, or a function when the selected plurality of sub-images match the previously selected sub-images.
  • Additionally, the one or more instructions for providing sub-images may include one or more instructions for categorizing the sub-images.
  • Additionally, the selected plurality of sub-images may include at least one of two sub-images of a different category or at least one sub-image that is an animation.
  • Additionally, the one or more instructions for receiving selection may include one or more instructions for receiving selection of a character associated with the sub-image.
  • Additionally, each sub-image includes a corresponding character, and the unified image includes a first code comprising the characters.
  • Additionally, the one or more instructions for constructing may include one or more instructions for overlaying at least one of the selected plurality of sub-images onto at least one other of the selected plurality of sub-images.
  • Additionally, the selected plurality of sub-images may be segments of the first image, and the one or more instructions for constructing may include one or more instructions for assembling the selected plurality of sub-images to form the first image based on the segmentation of each sub-image.
  • Additionally, there may be at least one of the selected plurality of sub-images that has a plurality of corresponding characters.
  • Additionally, the sub-images may include sub-images relating to living things, non-living things, and places.
  • Additionally, the one or more instructions for constructing may include one or more instructions for overlaying the selected plurality of sub-images to corresponding specific regions on a scenic sub-image.
  • Additionally, the plurality of sub-images may include a plurality of characters, and the one or more instructions for constructing may include one or more instructions for displaying the corresponding sub-images as the plurality of characters are selected.
  • According to another aspect, a method may include providing sub-images having a plurality of categories, selecting a plurality of sub-images, constructing a first image that includes the selected plurality of sub-images, displaying the first image in a unified image, comparing information associated with the unified image with information associated with a previously constructed unified image, and providing access to at least one of a device, a service, or a function when the information associated with the unified image matches the information associated with the previously constructed unified image.
  • Additionally, the selecting may include selecting the plurality of sub-images having at least two sub-images of a different category.
  • Additionally, the method may include providing characters associated with the plurality of sub-images, and where the first image includes the selected plurality of sub-images and associated characters.
  • Additionally, the selecting may include selecting the plurality of sub-images based on characters associated with the plurality of sub-images.
  • Additionally, the constructing may include overlaying at least one of the selected plurality of sub-images onto another of the selected plurality of sub-images. Additionally, the constructing may include constructing the first image based on the selected plurality of sub-images that correspond to segments of the first image, each segment including a sub-image and at least one character, and each segment having a specific region to occupy in the first image.
  • Additionally, the sub-images may include animation and video. Additionally, the constructing may include dragging and dropping the selected sub-image in an unoccupied region of the first image to be constructed.
  • According to yet another aspect, a device may include a memory to store instructions, and a processor to execute the instructions to receive selection of a plurality of sub-images and characters, construct a first image that includes the selected plurality of sub-images and a first code that includes the selected characters, compare the received selections with previously stored selections associated with an image and a code, and provide access to a function or a service of the device when the received selections match the previously stored selections.
  • Additionally, the device may include a display, and the processor may execute instructions to display the first image and the first code in a unified image on the display.
  • According to still another aspect, a device may include means for providing sub-images having different categories, means for receiving selection of a plurality of sub-images, means for constructing a first image that includes the selected plurality of sub-images, means for displaying the first image in a unified image, means for comparing the selections with previously stored selections, and means for providing access to a function or a service of the device when the selections match the previously stored selections.
  • According to another aspect, a method may include receiving input associated with constructing a first image, generating the first image based on the input, comparing the first image with a second image, and providing access to at least one of a device, a service, or a function when the first image matches the second image.
  • Additionally, the method may include generating a first passcode that is associated with the first image, and comparing the first passcode with a second passcode associated with the second image.
  • Additionally, the receiving input associated with constructing the first image may include receiving a selection of a base sub-image to facilitate construction of the first image, and the method may further include generating a second sub-image based on the base sub-image.
  • Additionally, generating the second sub-image may include receiving drawing input from a user associated with constructing the second sub-image.
  • Additionally, the method may include displaying a grid that includes a plurality of coordinates in response to selection of the base sub-image.
  • Additionally, the providing access may include comparing coordinate information associated with the first image with coordinate information associated with the second image. Additionally, the receiving input associated with constructing the first image may include receiving a selection of a plurality of sub-images and a corresponding character associated with each sub-image.
  • Additionally, the providing access to at least one of the device, service, or function may include providing access to at least some of applications or functions of a mobile phone.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments described herein and, together with the description, explain these exemplary embodiments. In the drawings:
  • FIG. 1 is a diagram illustrating a concept described herein;
  • FIG. 2 is a diagram illustrating a front view of exemplary external components of an exemplary device having image-based code capability;
  • FIG. 3 is a diagram illustrating a rear view of exemplary external components of the device depicted in FIG. 2;
  • FIG. 4 is a diagram illustrating exemplary internal components of the device depicted in FIG. 2;
  • FIG. 5 is a diagram of the exemplary image-based code component depicted in FIG. 4;
  • FIG. 6 is a flow diagram illustrating exemplary operations for designing an image-based code;
  • FIG. 7 is a flow diagram illustrating exemplary operations for providing image-based code to access a device, a service or a function;
  • FIG. 8 is a flow diagram illustrating exemplary operations for designing an image to access a device, a service or a function;
  • FIG. 9 is a flow diagram illustrating exemplary operations for providing an image to access a device, a service, or a function; and
  • FIGS. 10-14 are diagrams illustrating exemplary screenshots for providing an image-based code.
  • DETAILED DESCRIPTION
  • The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following description does not limit the invention. The terms image and sub-image, as used herein, are intended to be broadly interpreted to include any representation of graphical information (e.g., a picture, a video, an animation, etc.). The term code, as used herein, is intended to be broadly interpreted to include any character string (e.g., letters, numbers, alphanumeric sequence, symbols, etc.). The term character, as used herein, is intended to be broadly interpreted to include any letter, number, or symbol.
  • Overview
  • Implementations described herein may provide an image-based code. FIG. 1 is a diagram illustrating a concept 100 as described herein. As illustrated, a device may include a display 104 and logic that provides for the design and/or use of an image-based code. For example, an image 102 may be displayed on display 104 of the device. Image 102 may be divided into segments 106. Each segment 106 may include a sub-image of image 102 and a corresponding sub-character of code 108. A user may construct image 102 by choosing from an array of segments 106 using a tab 110. For example, a user may select tab 110 and change the top portion of the image (i.e., a sub-image of a woman's head wearing a baseball cap) to another sub-image (e.g., a man's head wearing a fireman's helmet (not illustrated)). A user may construct a unified image based on the selected segments.
  • The user's selection of the predetermined “correct” image 102 and/or code 108 may be used as a basis for security functions for the device. For example, the device may require correct selection of image 102 and/or code 108 before the device will allow access to itself.
  • As will be described herein, numerous variations to FIG. 1 may be employed. For example, instead of the image being segmented, the image may include a layering of sub-images. Additionally, or alternatively, a sub-image may include animation and/or video. Additionally, or alternatively, a user may select a character of a code to display a sub-image. Still further, additional variations will be described below.
  • A sub-image may be of any type, such as living things, non-living things, places, shapes, symbols, etc. Regardless of type, a user may construct a unique, single image based on a plurality of sub-images that may be more memorable than a code alone. In addition, a user may still be provided with a corresponding code of the image that the user may remember.
  • Exemplary Device
  • FIG. 2 is a diagram illustrating a front view of exemplary external components of an exemplary device having image-based code capability. As illustrated, device 200 may include a housing 205, a microphone 210, a speaker 220, a keypad 230, function keys 240, a display 250, and a camera button 260. The terms device and component, as used herein, are intended to be broadly interpreted to include hardware, software, and/or a combination of hardware and software.
  • Housing 205 may include a structure configured to contain components of device 200. For example, housing 205 may be formed from plastic and may be configured to support microphone 210, speaker 220, keypad 230, function keys 240, display 250, and camera button 260.
  • Microphone 210 may include any component capable of transducing air pressure waves to a corresponding electrical signal. For example, a user may speak into microphone 210 during a telephone call. Speaker 220 may include any component capable of transducing an electrical signal to a corresponding sound wave. For example, a user may listen to music through speaker 220.
  • Keypad 230 may include any component capable of providing input to device 200. Keypad 230 may include a standard telephone keypad. Keypad 230 may also include one or more special purpose keys. In one implementation, each key of keypad 230 may be, for example, a pushbutton. A user may utilize keypad 230 for entering information, such as text or a phone number, or activating a special function.
  • Function keys 240 may include any component capable of providing input to device 200. Function keys 240 may include a key that permits a user to cause device 200 to perform one or more operations. The functionality associated with a key of function keys 240 may change depending on the mode of device 200. For example, function keys 240 may perform a variety of operations, such as placing a telephone call, playing various media, setting various camera features (e.g., focus, zoom, etc.) or accessing an application. Function keys 240 may include a key that provides a cursor function and a select function. In one implementation, each key of function keys 240 may be, for example, a pushbutton.
  • Display 250 may include any component capable of providing visual information. For example, in one implementation, display 250 may be a liquid crystal display (LCD). In another implementation, display 250 may be any one of other display technologies, such as a plasma display panel (PDP), a field emission display (FED), a thin film transistor (TFT) display, etc. Display 250 may be utilized to display, for example, text, image, and/or video information.
  • Display 250 may also operate as a view finder, as will be described later. Camera button 260 may be a pushbutton that enables a user to take an image.
  • Device 200 is exemplary; device 200 is intended to be broadly interpreted to include any type of electronic device where an image-based code may be utilized. For example, device 200 may include a communication device, such as a wireless telephone or a personal digital assistant (PDA), a computational device, such as a computer, an entertainment device, such as a game system, a stationary device, such as a security system, or any other type of device that includes a display in which an image-based code may be utilized. Accordingly, although FIG. 2 illustrates exemplary external components of device 200, in other implementations, device 200 may contain fewer, different, or additional external components than the external components depicted in FIG. 2. Additionally, or alternatively, one or more external components of device 200 may include the capabilities of one or more other external components of device 200. For example, display 250 may be an input component (e.g., a touch screen). Additionally, or alternatively, the external components may be arranged differently than the external components depicted in FIG. 2.
  • In other instances, a user may access a function or service via a network (e.g., the Internet, a private network, a wireless network, a television network, etc.) where an image-based code may be utilized. For example, a user may visit a Web server to gain access to a credit card account, a banking account, an e-mail account, a video rental service account, etc. based on an image-based code. Accordingly, the concept described herein may be applied to various platforms and schemes.
  • FIG. 3 is a diagram illustrating a rear view of exemplary external components of the device depicted in FIG. 2. As illustrated, in addition to the components previously described, device 200 may include a camera 370, a lens assembly 372, and a flash 374.
  • Camera 370 may include any component capable of capturing an image. Camera 370 may be a digital camera. Display 250 may operate as a view finder when a user of device 200 operates camera 370. Camera 370 may provide for automatic and/or manual adjustment of a camera setting. In one implementation, device 200 may include camera software that is displayable on display 250 to allow a user to adjust a camera setting. For example, a user may be able to adjust a camera setting by operating function keys 240 and/or camera button 260.
  • Lens assembly 372 may include any component capable of manipulating light so that an image may be captured. Lens assembly 372 may include a number of optical lens elements. The optical lens elements may be of different shapes (e.g., convex, biconvex, plano-convex, concave, etc.) and different distances of separation. An optical lens element may be made from glass, plastic (e.g., acrylic), or plexiglass. In one implementation, lens assembly 372 may be permanently fixed to camera 370. Lens assembly 372 may provide for a variable aperture size (e.g., adjustable f-number). Flash 374 may include any type of light-emitting component to provide illumination when camera 370 captures an image. For example, flash 374 may be a light-emitting diode (LED) flash (e.g., white LED) or a xenon flash.
  • Although FIG. 3 illustrates exemplary external components, in other implementations, device 200 may include fewer, additional, and/or different components than the exemplary external components depicted in FIG. 3. For example, device 200 may not include camera 370 and other components associated therewith. In still other implementations, one or more external components of device 200 may be arranged differently.
  • FIG. 4 is a diagram illustrating exemplary internal components of the device depicted in FIGS. 2 and 3. As illustrated, device 200 may include microphone 210, speaker 220, keypad 230, function keys 240, display 250, camera button 260, camera 370, a memory 400, a transceiver 420, and a control unit 430. No further description of microphone 210, speaker 220, keypad 230, function keys 240, display 250, camera button 260, and camera 370 is provided with respect to FIG. 4.
  • Memory 400 may include any type of storing component to store data and instructions related to the operation and use of device 200. For example, memory 400 may include a memory component, such as a random access memory (RAM), a read only memory (ROM), and/or a programmable read only memory (PROM). Additionally, memory 400 may include a storage component, such as a magnetic storage component (e.g., a hard drive) or other type of computer-readable medium. Memory 400 may also include an external storing component, such as a Universal Serial Bus (USB) memory stick, a digital camera memory card, and/or a Subscriber Identity Module (SIM) card.
  • Memory 400 may include an image-based code component 410. Image-based code component 410 may include instructions to cause device 200 to provide image-based code capability as described herein. Image-based code component 410 will be described in greater detail below.
  • Transceiver 420 may include any component capable of transmitting and receiving information. For example, transceiver 420 may include a radio circuit that provides wireless communication with a network or another device.
  • Control unit 430 may include any logic that may interpret and execute instructions, and may control the overall operation of device 200. Logic, as used herein, may include hardware, software, and/or a combination of hardware and software. Control unit 430 may include, for example, a general-purpose processor, a microprocessor, a data processor, a co-processor, and/or a network processor. Control unit 430 may access instructions from memory 400, from other components of device 200, and/or from a source external to device 200 (e.g., a network or another device).
  • Control unit 430 may provide for different operational modes associated with device 200. Additionally, control unit 430 may operate in multiple modes simultaneously. For example, control unit 430 may operate in a camera mode, a walkman mode (e.g., a music playing mode), and/or a telephone mode. In one implementation, a user may prevent access to device 200 by employing an image-based code. The image-based code capability of device 200 will be described in greater detail below.
  • Although FIG. 4 illustrates exemplary internal components, in other implementations, device 200 may include fewer, additional, and/or different components than the exemplary internal components depicted in FIG. 4. For example, in one implementation, device 200 may not include transceiver 420. In still other implementations, one or more internal components of device 200 may include the capabilities of one or more other components of device 200. For example, transceiver 420 and/or control unit 430 may include their own on-board memory 400.
  • FIG. 5 is a diagram of the exemplary image-based code component depicted in FIG. 4. Image-based code component 410 may include an image store 510, an image arranger 520, and/or an image/code comparer 530. Although not illustrated, image-based code component 410 may include a graphical user interface (GUI). The GUI may include various graphical interfaces, such as icons, menus, tabs, drag-and-drop interface, etc. to permit the design and selection of an image/code pair and/or an image.
  • Image store 510 may allow a user to display various sub-images, such as living objects (e.g., people, animals, plants, etc.) or non-living objects (e.g., places, things, shapes, symbols, etc.) and characters.
  • Each of the sub-images may co-exist with one or more characters. In one implementation, a sub-image/character pair may be fixed. For example, as illustrated in FIG. 1, one of the sub-image/character pairs is a woman's head with a baseball cap (sub-image) and the number five (character). In this instance, image store 510 may not allow, for example, the number five to be changed to a different number because the sub-image/character pair (i.e., the woman's head with a baseball cap and the number five) is a fixed sub-image/character pair. In other implementations, however, image store 510 may provide for the customization of sub-image/character pairs. For example, image store 510 may provide a GUI to change, for example, one sub-image/character pair to a new and different sub-image/character pair. In this way, a user may design a sub-image/character pair that includes, perhaps, a favorite sub-image with a favorite character (e.g., lucky number). In still other implementations, image store 510 may provide a GUI where sub-images may not co-exist with one or more characters. For example, a user may create an image without a corresponding code.
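The distinction above between fixed and customizable sub-image/character pairs might be sketched as a simple mapping; the patent does not specify an implementation, so the class and method names below are hypothetical, and Python is used only for illustration:

```python
# Hypothetical sketch of an image store holding sub-image/character pairs.
# A pair may be marked "fixed" so its character cannot be reassigned.

class ImageStore:
    def __init__(self):
        # sub-image name -> (character, fixed flag)
        self._pairs = {}

    def add_pair(self, sub_image, character, fixed=False):
        self._pairs[sub_image] = (character, fixed)

    def set_character(self, sub_image, new_character):
        """Customize a pair; disallowed when the pair is fixed."""
        character, fixed = self._pairs[sub_image]
        if fixed:
            raise ValueError(f"pair for {sub_image!r} is fixed")
        self._pairs[sub_image] = (new_character, fixed)

    def character_for(self, sub_image):
        return self._pairs[sub_image][0]

store = ImageStore()
store.add_pair("woman_with_cap", "5", fixed=True)  # fixed pair, as in FIG. 1
store.add_pair("monkey", "Z")                      # customizable pair
store.set_character("monkey", "7")                 # allowed: pair is not fixed
```

An attempt to call `set_character("woman_with_cap", ...)` would raise, mirroring the behavior where image store 510 refuses to alter a fixed pair.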
  • Image store 510 may also allow a user to import sub-images. For example, the GUI may import a sub-image from a resource external to device 200, such as the Internet, or from a resource internal to device 200, such as memory 400. For example, an image captured by camera 370 and stored in memory 400 may be added to the image store 510. Additionally, or alternatively, image store 510 may provide a GUI to import new characters. For example, a user may import unique symbols (e.g., Chinese characters or abstract symbols) to be associated with a sub-image.
  • Image store 510 may also allow a user to create a sub-image and/or a character. For example, image store 510 may provide tools (e.g., drawing tools, painting tools, etc.) to create a sub-image and/or a character. Image store 510 may also provide a GUI to manage the size, shape and/or orientation of an imported sub-image so that the sub-image may be utilized to form an image. In other instances, a user may create a sub-image and/or an image utilizing, for example, a stylus, his or her finger, keys on keypad 230, a joystick, touchpad, etc.
  • Image arranger 520 may allow the user to construct an image and a code based on the sub-images and the characters from image store 510. That is, a user may construct a unified image based on the sub-images. Described below are two exemplary implementations that may be employed to construct an image and a code; however, other implementations may be realized. In a first implementation, an image/code pair may include a plurality of segments, where each segment includes at least a sub-image and a corresponding character. The GUI of image arranger 520 may include, for example, tabs, such as tabs 110, to select a segment, or may include a drag-and-drop interface to select a segment.
  • The segmentation of the image/code pair may or may not be uniform. For example, each segment of the image/code pair may or may not be of similar size, shape and/or orientation.
  • Additionally, or alternatively, each segment of the image/code pair may or may not differ in contribution to the overall image and/or code. That is, for an image/code pair, one segment may include a sub-image that contributes up to fifty percent of the image, and may contribute, for example, three characters of a six character code. Conversely, one segment may include a sub-image that contributes up to ten percent of the image, and may contribute, for example, one character of a five character code.
  • Additionally, or alternatively, the image/code pair may not be completely segmented. For example, an image/code pair may include an initial, static sub-image with no corresponding character, and the remaining portion of the image and the code may be constructed based on segments having a sub-image and a character.
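The segmentation scheme described above (segments contributing different numbers of characters, plus an optional static sub-image with no corresponding character) might be modeled as follows; the segment layout and all names are assumptions made for illustration only:

```python
# Hypothetical sketch: an image/code pair built from ordered segments,
# where each segment contributes a sub-image and zero or more characters.

def build_image_code(segments):
    """segments: list of (sub_image, characters) tuples, in order.
    A static segment contributes an empty character string."""
    image = [sub_image for sub_image, _ in segments]
    code = "".join(characters for _, characters in segments)
    return image, code

segments = [
    ("background", ""),     # static sub-image with no corresponding character
    ("big_stroke", "4AZ"),  # larger segment contributing three characters
    ("small_stroke", "Q"),  # smaller segment contributing one character
]
image, code = build_image_code(segments)
# image lists the sub-images in order; code is "4AZQ"
```

This captures the non-uniform contribution described above: one segment supplies three characters of the code while another supplies only one, and the static segment supplies none.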
  • In a second implementation, an image/code pair may be configured as a layering of sub-images having coexisting characters. For example, image-based code component 410 may include a GUI that provides selection of a base sub-image, such as a scenic sub-image (e.g., a jungle, outer space, a room in a house, underwater in an ocean, etc.) and various sub-images to overlay (e.g., by drag-and-drop) on the scenic sub-image. The selection of overlay sub-images may range from an overlay sub-image (e.g., a space ship or an exotic animal) that relates to the scenic sub-image (e.g., outer space or jungle), to an overlay sub-image (e.g., baseball bat or washing machine) that is relatively unrelated to the scenic sub-image (e.g., outer space or jungle). In one implementation, the overlay sub-images may be dispersed in various regions of the base sub-image. Additionally, or alternatively, a base sub-image (e.g., an abstract image, such as a colored circle) may provide for overlaying multiple overlay sub-images (e.g., concentric circles of different colors) in one region.
  • In one implementation, a region for an overlay sub-image may be fixed. For example, a base sub-image may include specific regions that accept an overlay sub-image. In one implementation, for example, as a specific region is occupied, the next region to accept an overlay sub-image may be highlighted. In one implementation, the specific regions to be occupied may have a particular order, while in other implementations the specific regions to be occupied may not have a particular order.
  • Additionally, or alternatively, a base sub-image may not include specific regions to be occupied. That is, an overlay sub-image may be placed anywhere on the base sub-image. In one implementation, the order in which the overlay sub-image is placed with respect to the base sub-image may change the order of the corresponding code. For example, a scenic sub-image (e.g., a jungle “X”) may include three animals (e.g., a lion “5”, a giraffe “4”, and a snake “H”).
  • When the order of overlaying the overlay sub-images is lion, snake, and giraffe, the code may be “X5H4”; however, when the order of placing the overlay sub-images is snake, lion, and giraffe, the code may be “XH54”. Thus, although the image may be the same (i.e., the image contains the same animals with corresponding characters and may have been placed in the same regions), the code may be different.
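The jungle example above can be sketched directly. The sub-image/character pairings follow the example in the text; the function name and data structure are hypothetical:

```python
# Hypothetical sketch: the code is the base sub-image's character followed
# by the characters of the overlay sub-images in placement order, so the
# same final image can yield different codes.

PAIRS = {"jungle": "X", "lion": "5", "giraffe": "4", "snake": "H"}

def code_for(base, overlays_in_placement_order):
    return PAIRS[base] + "".join(PAIRS[o] for o in overlays_in_placement_order)

# Same animals, different placement order, different code:
a = code_for("jungle", ["lion", "snake", "giraffe"])  # "X5H4"
b = code_for("jungle", ["snake", "lion", "giraffe"])  # "XH54"
```

Here `a` and `b` differ even though both images contain the same base sub-image and the same three overlay sub-images.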
  • According to another implementation, as previously described, image-based code component 410 may provide a selection of a base sub-image. However, instead of a user selecting various sub-images to overlay (e.g., by drag-and-drop), a user may create an image by, for example, drawing the image on display 250 utilizing a stylus, tools of image store 510, or some other input mechanism (e.g., a joystick, a touch pad, etc.). In one implementation, a user may create and/or select a code to correspond to the image. Additionally, or alternatively, image-based code component 410 may automatically generate a code to correspond to the image. Additionally, or alternatively, a user may create an image without having a corresponding code. That is, a user may only utilize the image, for example, to gain access to device 200, by drawing the image on display 250.
  • Image/Code comparer 530 may include a component to compare one image-based code to another image-based code. For example, image/code comparer 530 may compare an image-based code previously stored in memory 400 to an image-based code entered by a user when trying to use device 200. Image/Code comparer 530 may compare an image, a sub-image, a character, a code and/or information (e.g., identifiers or coordinates) associated therewith of the image-based code stored in memory 400 with an image, a sub-image, a character, a code and/or information (e.g., identifiers or coordinates) associated therewith of the image-based code entered.
  • In one implementation, image/code comparer 530 may provide, for example, an indication (e.g., a visual or an auditory cue) corresponding to the result of the comparison. Additionally, or alternatively, for example, image/code comparer 530 may not provide any indication of the result; rather, device 200 will permit access or deny access depending on the result of the comparison.
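A minimal sketch of the comparison step (a stored image-based code against an entered one), including the variant above in which no indication of the result is given beyond granting or denying access; the representation of an image as a tuple of identifiers is an assumption:

```python
# Hypothetical sketch of image/code comparison: access is granted only
# when the entered image/code pair matches the stored pair. No separate
# cue about the comparison result is emitted.

def compare(stored, entered):
    """stored/entered: (image_info, code) tuples; image_info may hold
    sub-image identifiers or coordinates, or None when no image is used."""
    return stored == entered

def access(stored, entered):
    return "granted" if compare(stored, entered) else "denied"

stored_pair = (("jungle", "lion", "giraffe"), "X54")
ok = access(stored_pair, (("jungle", "lion", "giraffe"), "X54"))
bad = access(stored_pair, (("jungle", "lion", "giraffe"), "X45"))
```

In practice the stored side would live in memory 400, and equality might be computed over identifiers or coordinate data rather than whole images.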
  • Although FIG. 5 illustrates exemplary components to provide the image-based code capability as described herein, in other implementations, device 200 may include fewer, different, or additional components than the exemplary components depicted in FIG. 5. As described above, image-based code component 410 may provide various functional capabilities for designing and employing an image-based code with, for example, device 200. However, in other applications, for example, a Web server employing image-based code component 410 for accessing a bank account, some of the functions described above may or may not be provided. For example, in such an instance, image-based code component 410 may not provide for importing sub-images and/or characters, or altering a sub-image/character pair.
  • FIG. 6 is a flow diagram illustrating exemplary operations for designing an image-based code. Process 600 may begin with a selection of a sub-image having a corresponding character, or a selection of a character having a corresponding sub-image (Block 610). Image-based code component 410 may provide a GUI on display 250. The GUI may provide for the selection of a sub-image and a corresponding character. In other instances, a character may be entered from keypad 230 and the corresponding sub-image may be displayed on display 250. As mentioned above, the GUI of image-based code component 410 may include, for example, various menus, icons, tabs, drag-and-drop interface, etc. to permit selection of a sub-image and/or character.
  • In Block 620, image-based code component 410 may determine whether an image/code pair is created. For example, image-based code component 410 may determine whether the number of sub-images or characters is insufficient or whether certain sub-image regions are unoccupied. If an image/code pair is not created (Block 620—NO), then additional selections may be needed to complete an image/code pair. When an image/code pair is created (Block 620—YES), the image/code pair may be stored in memory 400 (Block 630).
  • FIG. 7 is a flow diagram illustrating exemplary operations for providing image-based code access to a device, such as device 200. Process 700 may begin with a selection of a sub-image having a corresponding character, or a selection of a character having a corresponding sub-image (Block 710). Image-based code component 410 may provide a GUI on display 250. The GUI may provide for the selection of a sub-image and a corresponding character. In other instances, a character may be entered from keypad 230 and the corresponding sub-image may be displayed on display 250. As mentioned above, the GUI of image-based code component 410 may include, for example, various menus, icons, tabs, drag-and-drop interface, etc., to permit selection of a sub-image and/or a character.
  • In Block 720, image-based code component 410 may determine whether an image/code pair is created. For example, image-based code component 410 may determine whether the number of sub-images or characters is insufficient, or whether certain sub-image regions are unoccupied. If an image/code pair is not created (Block 720—NO), then additional selections may be needed to complete an image/code pair. When an image/code pair is created (Block 720—YES), image-based code component 410 may compare the entered image/code pair with another image/code pair to determine whether a match exists (Block 730). For example, image-based code component 410 may make a comparison with an image/code pair stored in memory 400. When the comparison is successful (Block 730—YES), access to device 200 may be granted (Block 740); however, when the comparison is not successful (Block 730—NO), access to device 200 may be denied (Block 750).
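Process 700 above (accumulate selections until the image/code pair is complete, then compare) might loop as follows; the completion criterion, a required code length, is an assumption, as are all names:

```python
# Hypothetical sketch of the access flow of FIG. 7: collect
# sub-image/character selections until the pair is complete (Block 720),
# then compare against the stored code to grant or deny access
# (Blocks 730-750).

def access_flow(selections, stored_code, required_length):
    code = ""
    for _sub_image, character in selections:      # Block 710: selections
        code += character
        if len(code) < required_length:           # Block 720: complete yet?
            continue
        return "granted" if code == stored_code else "denied"
    return "incomplete"                           # more selections needed

result = access_flow(
    [("monkey", "Z"), ("giraffe", "4"), ("lion", "C")],
    stored_code="Z4C",
    required_length=3,
)
```

The `"incomplete"` branch corresponds to the NO path of Block 720, where additional selections are needed before comparison.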
  • FIG. 8 is a flow diagram illustrating exemplary operations for designing an image-based code. Process 800 may begin with creating an image having a corresponding code, or creating an image without a corresponding code (Block 810). In such an instance, image-based code component 410 may provide a GUI on display 250. The GUI may include various menus, icons, tabs, and/or drawing tools, etc. to create an image and include a region on display 250 to create one or more images. Additionally, or alternatively, a user may create an image on display 250 utilizing his/her finger, a stylus, one or more keys of keypad 230, a joystick, a touchpad, etc.
  • In each instance, a user may create an image without selecting each sub-image to create the image. For example, a user may create an image merely by using his or her finger, a stylus, one or more keys, a joystick, a touchpad, etc. Additionally, or alternatively, a user may select a base sub-image (e.g., a grid) to draw on and to serve as a guide to create an image. As described above, an image may be created with or without a corresponding code.
  • In one implementation, a code corresponding to an image may be selected by a user. Additionally, or alternatively, a code may be automatically generated by image-based code component 410.
  • In Block 820, image-based code component 410 may determine whether the image with the corresponding code is created or the image without the corresponding code is created.
  • For example, image-based code component 410 may determine whether the image with or without the corresponding code is created based on whether a user enters the image or code (e.g., by pressing an Enter key, etc.). If the image with or without the corresponding code is not created (Block 820—NO), then additional information (i.e., image creation or code creation) may be needed to complete the image with or without the corresponding code. When the image with or without the corresponding code is created (Block 820—YES), the image with or without the corresponding code may be stored in memory 400 (Block 830). In one implementation, the stored image may include, among other things, coordinate information corresponding to the base sub-image (e.g., a grid) and/or drawing data.
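The stored representation mentioned in Block 830 (coordinate information for the base sub-image plus drawing data, with or without a code) could be as simple as the record below; the grid addressing and field names are assumptions for illustration:

```python
# Hypothetical sketch: a user-drawn image stored as the ordered grid
# coordinates touched by each stroke, with an optional corresponding code.

def store_drawing(strokes, code=None):
    """strokes: list of strokes, each a list of (row, col) grid
    coordinates in the order they were drawn."""
    return {
        "grid_coordinates": [list(stroke) for stroke in strokes],
        "code": code,  # None when no corresponding code is created
    }

# A T-shape on a grid: one horizontal stroke, then one vertical stroke.
record = store_drawing(
    strokes=[[(0, 0), (0, 1), (0, 2)], [(0, 1), (1, 1), (2, 1)]],
    code=None,
)
```

Keeping the strokes ordered preserves the stroke-order information that, as described later with respect to FIG. 13, may itself be part of the secret.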
  • FIG. 9 is a flow diagram illustrating exemplary operations for providing image-based code access to a device, such as device 200. Process 900 may begin with creating an image having a corresponding code, or creating an image without a corresponding code (Block 910). Image-based code component 410 may provide a GUI on display 250. The GUI may include various menus, icons, tabs, and/or drawing tools, etc. to create an image and include a region on display 250 to create an image. Additionally, or alternatively, a user may create an image on display 250 utilizing his/her finger, a stylus, one or more keys of keypad 230, a joystick, a touchpad, etc. In each instance, a user may create an image without selecting each sub-image to create the image, as described above with respect to FIG. 8. Additionally, or alternatively, a user may select a base sub-image (e.g., a grid) to draw on and to serve as a guide to create an image.
  • As described above, an image may be created with or without a corresponding code. In one implementation, a code corresponding to an image may be selected by a user. Additionally, or alternatively, a code may be automatically generated by image-based code component 410.
  • In Block 920, image-based code component 410 may determine whether the image with the corresponding code is created or the image without the corresponding code is created. For example, image-based code component 410 may determine whether the image with or without the corresponding code is created based on whether a user enters the image or code (e.g., by pressing an Enter key, etc.). If the image with or without the corresponding code is not created (Block 920—NO), then additional information (i.e., image creation or code creation) may be needed to complete the image with or without the corresponding code. When the image with or without the corresponding code is created (Block 920—YES), image-based code component 410 may compare the entered image with or without the corresponding code to another image with or without a corresponding code to determine whether a match exists (Block 930). For example, image-based code component 410 may make a comparison with image information and/or code information stored in memory 400. When the comparison is successful (Block 930—YES), access to device 200 may be granted (Block 940); however, when the comparison is not successful (Block 930—NO), access to device 200 may be denied (Block 950).
  • FIGS. 6-9 illustrate exemplary operations for permitting access to device 200; however, as previously mentioned, in other instances, a user may access a function or service via a network (e.g., the Internet, a public switched telephone network, a private network, a wireless network, a television network, etc.) where an image-based code may be utilized. For example, a user may visit a Web server to gain access to a credit card account, a banking account, an e-mail account, a video rental service account, etc. based on an image-based code. Accordingly, the concept described herein may be applied to various platforms and schemes. Additionally, it will be appreciated that a user may enter only the code (when a code is created) to access device 200 in correspondence to the flow diagrams of FIGS. 6-9, or use a code once access is granted to device 200 based on an image-based code. For example, a two-level code may be employed for access to some functions: the image-based code may be used to access certain applications and/or functions (e.g., placing a call), and a second code (e.g., an alphanumeric code) may be used to access other applications and/or functions (e.g., modifying a phonebook contact list).
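The two-level scheme above (an image-based code for some functions, a second alphanumeric code for others) might gate functions like this; the specific functions, levels, and names are hypothetical:

```python
# Hypothetical sketch of two-level access: the image-based code unlocks
# level-1 functions (e.g., placing a call); a second alphanumeric code
# is additionally needed for level-2 functions (e.g., modifying the
# phonebook contact list).

FUNCTION_LEVEL = {"place_call": 1, "edit_phonebook": 2}

def allowed(function, image_code_ok, second_code_ok=False):
    level = FUNCTION_LEVEL[function]
    if level == 1:
        return image_code_ok
    return image_code_ok and second_code_ok

call_ok = allowed("place_call", image_code_ok=True)
book_blocked = allowed("edit_phonebook", image_code_ok=True)
book_ok = allowed("edit_phonebook", image_code_ok=True, second_code_ok=True)
```

The same gating could sit on a Web server rather than on device 200, consistent with the network-based uses described above.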
  • EXAMPLES
  • FIGS. 10-14 are diagrams illustrating exemplary screenshots for providing an image-based code. The description below omits discussion relating to a user's selection of, for example, various menus, prompts, and/or graphical links to arrive at the screenshots depicted in FIGS. 10-14. As illustrated, each screenshot is displayed on display 250 of device 200.
  • FIG. 10 is a diagram illustrating an image/code pair based on layering of sub-images. In this example, the GUI of image-based code component 410 includes a code region 1010, an image region 1020, and a sub-image selector region 1030. Code region 1010 indicates the corresponding characters of the sub-images in image region 1020. Sub-image selector region 1030 provides a GUI component to select various sub-images.
  • For example, image region 1020 includes a base sub-image, such as a jungle, with a corresponding character of π (pi). Various sub-images may be placed on the scenic sub-image using, for example, drag-and-drop. For example, a user may drag the sub-image of the monkey onto the sub-image of the jungle, and the letter Z may appear in the code region 1010. Next, the user may drag the sub-image of the giraffe onto the sub-image of the jungle, and the number four may appear in the code region 1010. Subsequently, the user may drag the lion and the snake to image region 1020 and the corresponding letters C and E may appear in the code region 1010. In this example, the scenic sub-image does not include any specific regions where an overlay sub-image may be placed. However, in other implementations, a base image may include specific regions where a sub-image may be placed.
  • FIG. 11 is a diagram illustrating a partial image/code pair based on segmentation. In this example, the GUI of image-based code component 410 includes a code region 1110, an image region 1120, and a sub-image selector region 1130. As illustrated, code region 1110 includes three characters (i.e., 4, A, and Z) corresponding to the three strokes in image region 1120, and three unoccupied character regions corresponding to segments four and five. Image region 1120 includes two straight lines and one curved line corresponding to the three characters, and two unoccupied segment regions numbered four and five corresponding to the unoccupied character regions. The size and orientation of each unoccupied segment region is different, and each unoccupied segment region is highlighted. In this example, unoccupied segment region four may have two corresponding characters in code region 1110, while unoccupied segment region five has one corresponding character in code region 1110.
  • Sub-image selector region 1130 includes a pull-down menu to select the type of image to be constructed. In this example, the image to be constructed is of a symbol. So, for example, a user may select the category of the image to be designed and/or entered by using the pull-down menu in sub-image selector region 1130. Additionally, a pull-down menu indicates the next segment to be occupied. Sub-image selector region 1130 may automatically provide sub-images that correspond with the size, shape, and orientation of, for example, the next unoccupied segment. So, for example, when a user selects symbol as a category of the image to be designed and/or entered, the pull-down menu may indicate segment one, and sub-image selector region 1130 may automatically provide sub-images that correspond with the first segment to be occupied. A user may select a sub-image and drag the sub-image into image region 1120, and the corresponding character, such as the number four, may appear in code region 1110.
  • As illustrated in FIG. 11, a user may have already selected sub-image/character pairs for segments one, two and three (i.e., the two straight lines and the one curved line in image region 1120). Segment four appears in the pull-down menu of sub-image selector region 1130, and unoccupied segment four may be highlighted as the next unoccupied segment in image region 1120. However, any segment may be selected using the pull-down menu. For example, if a user selects an incorrect sub-image/character pair and/or wishes to change the sub-image/character pair, the user may re-select a different sub-image/character pair.
  • FIG. 12 is a diagram illustrating a partial image/code pair based on segmentation and layering. In this example, the GUI of image-based code component 410 includes a code region 1210, an image region 1220, and a sub-image selector region 1230. In this example, image region 1220 includes a static sub-image (i.e., a female head) without a corresponding character. Sub-image selector region 1230 provides a pull-down menu indicating that a female person is selected. A user may select from various categories of images to be designed and/or entered. Sub-image selector region 1230 also indicates the first of the sub-images to be used. For example, sub-image selector region 1230 indicates that sub-images of eyes, which are animated, may be selected. The sub-image of eyes may be placed on the face portion of the female person. That is, the sub-image of eyes is an overlay sub-image. In one implementation, when eyes are selected, the next type of sub-image may be provided, such as different animations of a torso (e.g., twisting, using a hula hoop, etc.). In this instance, the sub-image of the torso may be a segment sub-image. So, for example, when a sub-image of eyes is selected, the pull-down menu may indicate that sub-images of a torso may be selected.
  • In this example, the image/code pair may include a combination of overlay sub-images and segment sub-images. In other instances, the image/code pair may include only overlay sub-images or only segment sub-images.
  • FIG. 13 is a diagram illustrating a user-created image. In this example, the GUI of image-based code component 410 may include a code region 1310, an image region 1320, and/or a menu region 1330. In this example, image region 1320 may allow a user to create an image. For example, if display 250 is a touch screen, a user may utilize his/her finger, a stylus, or some other instrument to create an image, such as the T-shaped image illustrated in image region 1320. As illustrated in menu region 1330, a user may select a grid to use as a guide to create an image. In this way, a user may create an image simply by connecting the dots.
  • Although the T-shaped image is illustrated in FIG. 13, other types of images, symbols, abstract shapes, etc., may be created via the grid or without the grid. Menu region 1330 may provide a selection of a grid size (e.g., 2×2 or 4×4). However, in other instances, a user may desire to select a larger grid (e.g., a 5×5 or a 6×6). In such instances, a larger grid may allow the user to create a more complex image, which may translate into a greater level of security. For example, as illustrated in FIG. 14, a more complex image may be created using an 8×8 grid (e.g., a smiley face). Additionally, or alternatively, other grid patterns may be utilized to provide a greater complexity and/or sophistication of the image. In this regard, it will be appreciated that different images (e.g., in terms of complexity) may be used to provide access to a device, a function, and/or a service. For example, accessing credit card information may require a more complex image-based code than other information that may be deemed less sensitive.
  • Additionally, or alternatively, menu region 1330 may allow a user to select whether a code is to be created with the image. In the examples of FIGS. 13 and 14, a user may select that no code is to be created. Thus, in such an instance, a user may create the image without creating a corresponding code. Additionally, or alternatively, a user may select that a code is created. Since the user is creating the image without, for example, a drag-and-drop of sub-images, the user may type in a code after the image is created. In other instances, image-based code component 410 may generate, for example, a code of random characters, and display the characters in code region 1310. Additionally, or alternatively, similar to that previously described, an order of strokes by which the image is created by a user may determine the code and/or whether the image is correct. That is, a user may create the same image multiple ways; however, if the order of the strokes is used by image-based code component 410 when determining whether to allow access to device 200, only one order of strokes may allow a user to access device 200. In each case, the user may know whether the order of strokes is being used by image-based code component 410 to allow access to device 200. In one implementation, the order of strokes may be based on coordinates corresponding to the dots of the grid. In other words, an order by which a user draws, for example, some symbol, shape, etc., using a series of dots may allow image-based code component 410 to determine an order of the strokes by which an image is created. Additionally, or alternatively, for example, image/code comparer 530 may utilize coordinate information corresponding to the dots of the grid to determine whether an image matches a pre-stored image.
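Stroke-order checking as described above might work as follows, assuming dots are addressed by grid coordinates and a stored flag records whether stroke order is used; all names are hypothetical:

```python
# Hypothetical sketch: when stroke order is used, two drawings producing
# the same set of grid dots but in a different order do not match.

def images_match(stored_strokes, entered_strokes, use_stroke_order):
    """Each strokes argument: list of strokes, each a list of (row, col)
    grid coordinates in drawing order."""
    if use_stroke_order:
        # Exact stroke-by-stroke, dot-by-dot comparison.
        return stored_strokes == entered_strokes
    # Otherwise only the set of dots drawn matters.
    def dots(strokes):
        return {dot for stroke in strokes for dot in stroke}
    return dots(stored_strokes) == dots(entered_strokes)

a = [[(0, 0), (0, 1)], [(1, 0), (1, 1)]]  # top stroke, then bottom stroke
b = [[(1, 0), (1, 1)], [(0, 0), (0, 1)]]  # same dots, reversed stroke order

loose = images_match(a, b, use_stroke_order=False)  # same image, order ignored
strict = images_match(a, b, use_stroke_order=True)  # order used: no match
```

This mirrors the behavior above: a user may draw the same image multiple ways, but when stroke order is part of the comparison, only one order grants access.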
  • Although not specifically mentioned with respect to FIGS. 10-14, it will be appreciated that a user may enter a letter and/or a number in code regions 1010, 1110, 1210, or 1310 without selecting a sub-image or creating an image. In such an instance, when a user enters a character in a code region, the corresponding sub-image may be provided in image regions 1020, 1120, 1220, or 1320.
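The behavior above implies a two-way mapping between code characters and sub-images, so that typing in the code region populates the image region. A minimal sketch, assuming a hypothetical lookup table (the file names and function name are invented for illustration):

```python
# Hypothetical mapping from code characters to sub-image assets.
SUB_IMAGES = {"a": "apple.png", "b": "ball.png", "7": "seven.png"}

def on_code_entry(code):
    """As characters are typed in the code region, return the
    corresponding sub-images to render in the image region."""
    return [SUB_IMAGES[ch] for ch in code if ch in SUB_IMAGES]

assert on_code_entry("a7") == ["apple.png", "seven.png"]
```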
  • Additionally, or alternatively, it will be appreciated with respect to FIGS. 10-14 that selections from menus, etc., as described above, may be implemented as user preferences.
  • CONCLUSION
  • The foregoing description of implementations provides illustration, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the teachings. For example, while it has been described that different images could be utilized based on different levels of security, other criteria may be utilized. For example, different days of the week may be a basis for providing a different image. In this regard, for example, an image of someone playing soccer could be the image used to provide access to device 200 during the weekend, and the image of the smiley face could be the image used during the weekdays. Thus, a user may customize different settings associated with an image-based code to provide added security, entertainment, etc.
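The day-of-week criterion described above amounts to selecting which stored unlock image applies at authentication time. As a rough sketch, under the assumption that the device keeps one image per schedule slot (the function name and keys are invented here):

```python
from datetime import date

def required_image(stored_images, today=None):
    """Pick which stored unlock image applies today: the weekend image
    on Saturday/Sunday, otherwise the weekday image."""
    today = today or date.today()
    key = "weekend" if today.weekday() >= 5 else "weekday"
    return stored_images[key]

images = {"weekday": "smiley_face", "weekend": "soccer_player"}
assert required_image(images, date(2024, 1, 6)) == "soccer_player"  # a Saturday
assert required_image(images, date(2024, 1, 8)) == "smiley_face"    # a Monday
```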
  • It should be emphasized that the term “comprises” or “comprising” when used in the specification is taken to specify the presence of stated features, integers, steps, or components but does not preclude the presence or addition of one or more other features, integers, steps, components, or groups thereof.
  • In addition, while a series of processes and/or acts have been described herein, the order of the processes and/or acts may be modified in other implementations. Further, non-dependent processes and/or acts may be performed in parallel.
  • It will be apparent that aspects described herein may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement aspects does not limit the invention. Thus, the operation and behavior of the aspects were described without reference to the specific software code—it being understood that software and control hardware can be designed to implement the aspects based on the description herein. No element, act, or instruction used in the present application should be construed as critical or essential to the implementations described herein unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated list items.

Claims (21)

1.-30. (canceled)
31. A method comprising:
providing, using a device, a base image comprising image items, the image items being associated with coordinate information;
receiving, using the device, selection of a first plurality of the image items, the first plurality of the image items corresponding to first coordinate information;
creating, using the device, a first image based on the received selection of the first plurality of the image items;
comparing, using the device, first information, associated with the created first image, with second information associated with a second image,
where the comparing comprises at least one of:
comparing the first coordinate information, associated with the first image, with second coordinate information associated with the second image, or
comparing first code information associated with the created first image with second code information associated with the second image; and
providing, using the device, access to the device based on a result of comparing the first information, associated with the created first image, with the second information, associated with the second image.
32. The method of claim 31, further comprising:
receiving selection of a second plurality of the image items, the second plurality of the image items corresponding to the second coordinate information;
creating the second image based on the received selection of the second plurality of the image items; and
storing the second information associated with the created second image, the second information including the second coordinate information,
where the comparing comprises:
comparing the first information, associated with the created first image, with the stored second information of the created second image.
33. The method of claim 31, further comprising:
generating the first code information based on the received selection of the first plurality of the image items; and
generating the second code information based on a selected second plurality of the image items,
where comparing the first code information with the second code information includes:
comparing the generated first code information with the generated second code information.
34. The method of claim 31, where creating the first image comprises creating the second image based on a first order of the selection of the first plurality of the image items, and
where the second image is created based on a second order of selection of a second plurality of the image items,
the method further comprising:
providing access to the device based on the first order, of the selection of the first plurality of the image items, and the second order, of the selection of the second plurality of the image items.
35. The method of claim 34, where the first plurality of the image items corresponds to the second plurality of the image items, and
where the first order, of the selection of the first plurality of the image items, differs from the second order, of the selection of the second plurality of the image items,
the method further comprising:
denying access to the device based on the first order, of the selection of the first plurality of the image items, being different than the second order, of the selection of the first plurality of the image items.
36. The method of claim 31, where providing access to the device comprises at least one of:
providing access to one or more functions of the device;
providing access to information provided by the device; or
providing access to one or more services associated with the device.
37. The method of claim 31, where the second image corresponds to a first level of complexity, the first level of complexity being associated with a first level of security, and
where providing access to the device comprises providing access to a first type of information including confidential information,
the method further comprising:
creating a third image corresponding to a second, different level of complexity, the second level of complexity being associated with a second, different level of security; and
providing, based on the created third image, access to a second, different type of information excluding the confidential information.
38. The method of claim 37, where a first size of the base image corresponds to the first level of complexity,
the method further comprising:
receiving selection of a second base image having a second, different size, the second size corresponding to the second level of complexity; and
creating the third image using the selected second base image.
39. A system comprising:
a device to:
receive selection of a base image including image items;
receive selection of a first plurality of the image items, the first plurality of the image items corresponding to first coordinate information;
create a first image based on the received selection of the first plurality of the image items;
compare first information of the created first image with stored second information of a second image,
where, when comparing, the device is to at least one of:
compare the first coordinate information with second coordinate information included in the stored second information of the second image, or
compare first code information, associated with the created first image, with second code information included in the stored second information of the second image; and
provide access to the device based on comparing the first information, of the created first image, with the stored second information, of the second image.
40. The system of claim 39, where, when providing access, the device is to at least one of:
provide access to information provided by the device;
provide access to at least one service associated with the device; or
provide access to at least one function of the device.
41. The system of claim 39, where, when providing access, the device is to:
provide access to a first function of the device;
receive the first code information; and
provide, based on the received first code information, access to a second function of the device, the second function being associated with the first function.
42. The system of claim 39, where the second image corresponds to a first level of security, and
where, when providing access, the device is to provide access to a first type of information corresponding to the first level of security,
the device further to:
create a third image corresponding to a second, different level of security; and
provide, based on the created third image, access to a second, different type of information, the second type of information including information that is more sensitive than information included in the first type of information.
43. The system of claim 42, where a quantity of the image items of the base image is associated with the first level of security,
the device further to:
receive selection of a second base image including a second, different quantity of image items, the second quantity of image items being associated with the second level of security; and
create the third image using the second base image.
44. The system of claim 42, where the second quantity of image items included in the second base image is greater than the quantity of the image items included in the base image.
45. The system of claim 39, where, when creating the first image, the device is to create the second image based on a first order of the selection of the first plurality of the image items, the second image being created, based on a second order of selection of a second plurality of the image items, prior to the first image,
the device further to:
generate the first code information based on the first order of the selection of the first plurality of the image items;
generate the second code information based on the second order of the selection of the second plurality of the image items; and
provide access to the device based on comparing the generated first code information with the generated second code information.
46. The system of claim 39, where the first order of the selection of the first plurality of the image items is different than the second order of the selection of the second plurality of the image items, and where the generated first code information is different than the generated second code information,
the device further to:
deny access to the device based on the generated first code information being different than the generated second code information.
47. A computer-readable memory device incorporating one or more instructions, executable by at least one processor, to perform a method comprising:
creating a first image based on a received user input;
comparing first information of the created first image with stored second information of a stored second image,
where the comparing comprises at least one of:
comparing first coordinate information of the created first image with second coordinate information included in the stored second information of the stored second image, or
comparing first code information generated based on the received user input with second code information included in the stored second information of the stored second image; and
providing access to the device based on a result of comparing the first information, of the created first image, with the stored second information, of the stored second image.
48. The computer-readable memory device of claim 47, where the stored second image corresponds to a first level of security, and
where, when providing access, the device is to provide access to a first type of information,
the one or more instructions to perform a method further comprising:
creating a third image corresponding to a second, different level of security; and
providing, based on the created third image, access to a second, different type of information, the second type of information including information that is less sensitive than information included in the first type of information.
49. The computer-readable memory device of claim 47, the one or more instructions to perform a method further comprising:
receiving selection of a fourth image including a first quantity of image items, the first quantity of image items corresponding to the first level of security;
creating the second image based on a selection of one or more image items of the first quantity of image items;
receiving selection of a fifth image including a second, different quantity of image items, the second quantity of image items corresponding to the second level of security; and
creating the third image based on a selection of one or more image items of the second quantity of image items.
50. The computer-readable memory device of claim 47, where creating the first image comprises creating the second image based on receiving selection of a first plurality of image items in a first order,
where creating the second image comprises receiving selection of the first plurality of image items in a second, different order,
the one or more instructions to perform a method further comprising:
denying access to the device based on the first order being different than the second order.
US12/669,008 2007-08-13 2008-02-11 Graphical image authentication Abandoned US20110096997A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/669,008 US20110096997A1 (en) 2007-08-13 2008-02-11 Graphical image authentication

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/837,568 US8090201B2 (en) 2007-08-13 2007-08-13 Image-based code
US12/669,008 US20110096997A1 (en) 2007-08-13 2008-02-11 Graphical image authentication
PCT/IB2008/050491 WO2009022242A1 (en) 2007-08-13 2008-02-11 Graphical image authentication

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/837,568 Continuation US8090201B2 (en) 2007-08-13 2007-08-13 Image-based code

Publications (1)

Publication Number Publication Date
US20110096997A1 true US20110096997A1 (en) 2011-04-28

Family

ID=39495001

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/837,568 Expired - Fee Related US8090201B2 (en) 2007-08-13 2007-08-13 Image-based code
US12/669,008 Abandoned US20110096997A1 (en) 2007-08-13 2008-02-11 Graphical image authentication

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/837,568 Expired - Fee Related US8090201B2 (en) 2007-08-13 2007-08-13 Image-based code

Country Status (4)

Country Link
US (2) US8090201B2 (en)
EP (1) EP2179380A1 (en)
CN (1) CN101772772A (en)
WO (1) WO2009022242A1 (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100322485A1 (en) * 2009-06-18 2010-12-23 Research In Motion Limited Graphical authentication
US20120159608A1 (en) * 2010-12-16 2012-06-21 Research In Motion Limited Password entry using 3d image with spatial alignment
US20120324570A1 (en) * 2011-06-17 2012-12-20 Kenichi Taniuchi Information processor, information processing method, and computer program product
US8631487B2 (en) 2010-12-16 2014-01-14 Research In Motion Limited Simple algebraic and multi-layer passwords
US8635676B2 (en) 2010-12-16 2014-01-21 Blackberry Limited Visual or touchscreen password entry
US8650635B2 (en) 2010-12-16 2014-02-11 Blackberry Limited Pressure sensitive multi-layer passwords
US8650624B2 (en) 2010-12-16 2014-02-11 Blackberry Limited Obscuring visual login
US8661530B2 (en) 2010-12-16 2014-02-25 Blackberry Limited Multi-layer orientation-changing password
US8745694B2 (en) 2010-12-16 2014-06-03 Research In Motion Limited Adjusting the position of an endpoint reference for increasing security during device log-on
US8769641B2 (en) 2010-12-16 2014-07-01 Blackberry Limited Multi-layer multi-point or pathway-based passwords
US8769668B2 (en) 2011-05-09 2014-07-01 Blackberry Limited Touchscreen password entry
US8931083B2 (en) 2010-12-16 2015-01-06 Blackberry Limited Multi-layer multi-point or randomized passwords
US9135426B2 (en) 2010-12-16 2015-09-15 Blackberry Limited Password entry using moving images
EP2786280A4 (en) * 2011-11-30 2015-10-28 Patrick Welsch Secure authorization
US9223948B2 (en) 2011-11-01 2015-12-29 Blackberry Limited Combined passcode and activity launch modifier
US9258123B2 (en) 2010-12-16 2016-02-09 Blackberry Limited Multi-layered color-sensitive passwords
US9361447B1 (en) * 2014-09-04 2016-06-07 Emc Corporation Authentication based on user-selected image overlay effects
US9460279B2 (en) 2014-11-12 2016-10-04 International Business Machines Corporation Variable image presentation for authenticating a user
US10229260B1 (en) * 2014-03-27 2019-03-12 EMC IP Holding Company LLC Authenticating by labeling
US11017069B2 (en) * 2013-03-13 2021-05-25 Lookout, Inc. Method for changing mobile communications device functionality based upon receipt of a second code and the location of a key device
US20220207131A1 (en) * 2020-12-28 2022-06-30 Pearson Education, Inc. Secure authentication for young learners
USD969840S1 (en) 2020-12-28 2022-11-15 Pearson Education, Inc. Display screen or portion thereof with graphical user interface

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7953983B2 (en) 2005-03-08 2011-05-31 Microsoft Corporation Image or pictographic based computer login systems and methods
AU2008209429B2 (en) * 2007-01-23 2013-03-14 Carnegie Mellon University Controlling access to computer systems and for annotating media files
JP5008605B2 (en) * 2008-05-26 2012-08-22 富士フイルム株式会社 Image processing apparatus and method, and program
US8136167B1 (en) 2008-10-20 2012-03-13 Google Inc. Systems and methods for providing image feedback
US8621396B1 (en) * 2008-10-20 2013-12-31 Google Inc. Access using image-based manipulation
US8542251B1 (en) 2008-10-20 2013-09-24 Google Inc. Access using image-based manipulation
US8196198B1 (en) 2008-12-29 2012-06-05 Google Inc. Access using images
US8458485B2 (en) * 2009-06-17 2013-06-04 Microsoft Corporation Image-based unlock functionality on a computing device
US8392986B1 (en) 2009-06-17 2013-03-05 Google Inc. Evaluating text-based access strings
DE102009052174B3 (en) * 2009-11-06 2010-10-21 Christoph Althammer Method for authenticating user at computer unit i.e. data processing and/or communication device for use on e.g. software-operated device, involves evaluating sequence of traction vectors and symbol, source and target position identifiers
JP5420794B2 (en) * 2010-03-29 2014-02-19 インテル コーポレイション Method and apparatus for operation manager driven profile update
EP2386972A1 (en) * 2010-05-11 2011-11-16 Thomson Licensing A method and a device for generating a secret value
US8935767B2 (en) * 2010-05-14 2015-01-13 Microsoft Corporation Overlay human interactive proof system and techniques
EP2466521B1 (en) * 2010-12-16 2018-11-21 BlackBerry Limited Obscuring visual login
EP2466514B1 (en) * 2010-12-16 2018-11-07 BlackBerry Limited Multi-layer multi-point or randomized passwords
EP2466515B1 (en) * 2010-12-16 2018-10-31 BlackBerry Limited Multi-layer orientation-changing password
EP2466513B1 (en) * 2010-12-16 2018-11-21 BlackBerry Limited Visual or touchscreen password entry
AU2011202415B1 (en) 2011-05-24 2012-04-12 Microsoft Technology Licensing, Llc Picture gesture authentication
JP5783022B2 (en) * 2011-12-07 2015-09-24 富士通株式会社 Authentication program, authentication apparatus, and authentication method
JP5994390B2 (en) * 2012-05-24 2016-09-21 株式会社バッファロー Authentication method and wireless connection device
GB201209241D0 (en) * 2012-05-25 2012-07-04 Becrypt Ltd Computer implemented security system and method
JP6026186B2 (en) * 2012-09-07 2016-11-16 フェリカネットワークス株式会社 Information processing apparatus, information processing method, and program
IN2013CH05878A (en) * 2013-12-17 2015-06-19 Infosys Ltd
CN103810415B (en) * 2014-01-28 2016-08-03 曾立 A kind of graphical passwords guard method
US10083288B2 (en) 2014-03-25 2018-09-25 Sony Corporation and Sony Mobile Communications, Inc. Electronic device with parallaxing unlock screen and method
US9710666B2 (en) 2014-06-17 2017-07-18 Susan Olsen-Kreusch Methods and systems for user authentication in a computer system using multi-component log-ins, including image-based log-ins
US9411950B1 (en) 2014-06-17 2016-08-09 Susan Olsen-Kreusch Methods and systems for user authentication in a computer system using image-based log-ins
EP3518130A1 (en) 2018-01-30 2019-07-31 OneVisage SA Method and system for 3d graphical authentication on electronic devices
CN112287332A (en) * 2020-09-22 2021-01-29 山东师范大学 Password authentication system and method based on combination of graph and character
CN114266030B (en) * 2022-03-01 2022-06-17 北京信创数安科技有限公司 Access authentication method, device, equipment and storage medium based on two-dimensional graph

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5821933A (en) * 1995-09-14 1998-10-13 International Business Machines Corporation Visual access to restricted functions represented on a graphical user interface
US20060021024A1 (en) * 2004-07-10 2006-01-26 Samsung Electronics Co., Ltd. User certification apparatus and user certification method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6209104B1 (en) * 1996-12-10 2001-03-27 Reza Jalili Secure data entry and visual authentication system and method
US7219368B2 (en) * 1999-02-11 2007-05-15 Rsa Security Inc. Robust visual passwords
US6980081B2 (en) * 2002-05-10 2005-12-27 Hewlett-Packard Development Company, L.P. System and method for user authentication
US7243239B2 (en) * 2002-06-28 2007-07-10 Microsoft Corporation Click passwords
US7644433B2 (en) * 2002-12-23 2010-01-05 Authernative, Inc. Authentication system and method based upon random partial pattern recognition
CA2495450A1 (en) 2005-01-31 2006-07-31 Hai Tao A matrix based arrangement and method of graphical password authentication


Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9064104B2 (en) 2009-06-18 2015-06-23 Blackberry Limited Graphical authentication
US10325086B2 (en) * 2009-06-18 2019-06-18 Blackberry Limited Computing device with graphical authentication interface
US20120167199A1 (en) * 2009-06-18 2012-06-28 Research In Motion Limited Computing device with graphical authentication interface
US10176315B2 (en) 2009-06-18 2019-01-08 Blackberry Limited Graphical authentication
US20100322485A1 (en) * 2009-06-18 2010-12-23 Research In Motion Limited Graphical authentication
US9135426B2 (en) 2010-12-16 2015-09-15 Blackberry Limited Password entry using moving images
US8631487B2 (en) 2010-12-16 2014-01-14 Research In Motion Limited Simple algebraic and multi-layer passwords
US8650635B2 (en) 2010-12-16 2014-02-11 Blackberry Limited Pressure sensitive multi-layer passwords
US8650624B2 (en) 2010-12-16 2014-02-11 Blackberry Limited Obscuring visual login
US8661530B2 (en) 2010-12-16 2014-02-25 Blackberry Limited Multi-layer orientation-changing password
US8745694B2 (en) 2010-12-16 2014-06-03 Research In Motion Limited Adjusting the position of an endpoint reference for increasing security during device log-on
US8769641B2 (en) 2010-12-16 2014-07-01 Blackberry Limited Multi-layer multi-point or pathway-based passwords
US20120159608A1 (en) * 2010-12-16 2012-06-21 Research In Motion Limited Password entry using 3d image with spatial alignment
US8863271B2 (en) * 2010-12-16 2014-10-14 Blackberry Limited Password entry using 3D image with spatial alignment
US8931083B2 (en) 2010-12-16 2015-01-06 Blackberry Limited Multi-layer multi-point or randomized passwords
US9258123B2 (en) 2010-12-16 2016-02-09 Blackberry Limited Multi-layered color-sensitive passwords
US8635676B2 (en) 2010-12-16 2014-01-21 Blackberry Limited Visual or touchscreen password entry
US10621328B2 (en) 2010-12-16 2020-04-14 Blackberry Limited Password entry using 3D image with spatial alignment
US8769668B2 (en) 2011-05-09 2014-07-01 Blackberry Limited Touchscreen password entry
US8561171B2 (en) * 2011-06-17 2013-10-15 Kabushiki Kaisha Toshiba Information processor, information processing method, and computer program product
US20120324570A1 (en) * 2011-06-17 2012-12-20 Kenichi Taniuchi Information processor, information processing method, and computer program product
US9223948B2 (en) 2011-11-01 2015-12-29 Blackberry Limited Combined passcode and activity launch modifier
EP2786280A4 (en) * 2011-11-30 2015-10-28 Patrick Welsch Secure authorization
US11017069B2 (en) * 2013-03-13 2021-05-25 Lookout, Inc. Method for changing mobile communications device functionality based upon receipt of a second code and the location of a key device
US10229260B1 (en) * 2014-03-27 2019-03-12 EMC IP Holding Company LLC Authenticating by labeling
US10263972B1 (en) 2014-03-27 2019-04-16 EMC IP Holding Company LLC Authenticating by labeling
US9361447B1 (en) * 2014-09-04 2016-06-07 Emc Corporation Authentication based on user-selected image overlay effects
US10169564B2 (en) 2014-11-12 2019-01-01 International Business Machines Corporation Variable image presentation for authenticating a user
US9460279B2 (en) 2014-11-12 2016-10-04 International Business Machines Corporation Variable image presentation for authenticating a user
US20220207131A1 (en) * 2020-12-28 2022-06-30 Pearson Education, Inc. Secure authentication for young learners
USD969840S1 (en) 2020-12-28 2022-11-15 Pearson Education, Inc. Display screen or portion thereof with graphical user interface
US11568041B2 (en) * 2020-12-28 2023-01-31 Pearson Education, Inc. Secure authentication for young learners

Also Published As

Publication number Publication date
WO2009022242A1 (en) 2009-02-19
US20090046929A1 (en) 2009-02-19
EP2179380A1 (en) 2010-04-28
US8090201B2 (en) 2012-01-03
CN101772772A (en) 2010-07-07

Similar Documents

Publication Publication Date Title
US8090201B2 (en) Image-based code
KR102185854B1 (en) Implementation of biometric authentication
CN111274562B (en) Electronic device, method, and medium for biometric enrollment
US10176315B2 (en) Graphical authentication
KR20200001601A (en) Implementation of biometric authentication
KR102103866B1 (en) Implementation of biometric authentication
KR101885836B1 (en) Method of Providing User Certification and Additional Service Using Image Password System
CN112040145B (en) Image processing method and device and electronic equipment
CN106156562B (en) A kind of private space protective device, mobile terminal and method
JP5708419B2 (en) Image display system, learning system, image display method, and control program
CN110032849B (en) Implementation of biometric authentication
US10453261B2 (en) Method and electronic device for managing mood signature of a user
CN114968464A (en) Recent content display method, device, terminal and storage medium
KR20190033377A (en) Method and computer program for user authentication using image touch password
KR20190017315A (en) Method of Image Touch User Authentication Method and System Performing the same
KR20180134470A (en) Picture Password user Authentication
US10033859B2 (en) Mobile terminal and method of controlling same
JP2006099160A (en) Password setting device and password authentication device
KR20200136504A (en) Implementation of biometric authentication
CN104423787B (en) A kind of display methods and electronic equipment
CN117631818A (en) Interaction method, interaction device, electronic equipment and storage medium
KR20200000776A (en) Gesture user authentication method and computer program using Touch and Sliding
KR20190033697A (en) Method and Computer Program for User Authentication using Graphic Touch Password
CN108897466A (en) Input method panel method of adjustment, device, equipment and storage medium
KR20190142163A (en) Touch-password user authentication method and computer program using gesture

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY ERICSSON MOBILE COMMUNICATIONS AB, SWEDEN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MARCISZKO, TOBIAS;DE LEON, DAVID;SIGNING DATES FROM 20101108 TO 20110109;REEL/FRAME:025611/0046

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION