CN103135745A - Non-touch control method and non-touch control information device and non-touch control system based on depth images - Google Patents
- Publication number
- CN103135745A (application CN201110382053.0A)
- Authority
- CN
- China
- Prior art keywords
- forearm
- user
- locus
- information
- touch control
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a non-touch control method, a non-touch control information device, and a non-touch control system based on depth images. The method includes the steps of detecting a user's forearm in captured on-site depth video so as to output information representing the spatial position and pointing direction of the user's forearm, and converting that information into commands executable by a target system. With the disclosed method, device, and system, robust and reliable control of the target system can be achieved in a non-touch manner.
Description
Technical field
The present invention relates to non-contact control, and more particularly to a non-contact control method, information device, and system based on depth images.
Background technology
As is well known, intelligent non-contact control is one of the most promising directions in the field of system control. Among the various non-contact control methods, those based on visual information are particularly important, because visual information allows a machine to perceive the world in the same way humans do.
In addition, owing to the rapid development of manufacturing technology, camera devices have become cheaper and cheaper while their performance keeps improving. Cameras are now standard accessories on many information devices, from mobile phones and notebook computers to automatic teller machines and bulletin boards. All of this provides a solid foundation for applications based on visual information. At present, however, cameras often play only a simple role; in an automatic teller machine, for example, the camera is used merely to record visual information. There is therefore a need to develop more vision-based methods in order to expand the range of applications of camera-equipped electronic devices.
Patent document 1 (US5594469) proposes a gesture-based machine control system. In that system, raw footage is captured by a camera and decoded into an image sequence. Background removal is applied to the images in the sequence in order to recognize a trigger posture, which is identified using correlation techniques. Once the trigger posture has been recognized, its motion is tracked and a display is controlled accordingly.
The system of patent document 1 has some problems, however. Because it removes the background using a background segmentation technique, the system can produce recognition errors when the background is complex or changing. In addition, the user cannot stand in front of the camera when the system starts up, because the system must first determine the background image; otherwise a large amount of noise will be introduced and the background segmentation will fail.
Summary of the invention
The object of the present invention is to provide a non-contact control method, information device, and system based on depth images.
In a first aspect of the present invention, a method of controlling a contactless system is proposed, comprising the steps of: detecting a user's forearm in captured on-site depth video so as to output information representing the spatial position and pointing direction of the user's forearm; and converting the information representing the spatial position and pointing direction of the user's forearm into commands executable by a target system.
In a second aspect of the present invention, an information device is proposed, comprising: an object detection unit that detects a user's forearm in captured on-site depth video so as to output information representing the spatial position and pointing direction of the user's forearm; and a signal conversion unit that converts the information representing the spatial position and pointing direction of the user's forearm into commands executable by a target system.
In a third aspect of the present invention, a contactless system comprising the above information device is proposed.
With the above structure and method of the present invention, robust and reliable control of a target system can be achieved in a non-contact manner.
Description of drawings
The above features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a touchless control system according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of the touchless control system according to an embodiment of the present invention;
Fig. 3 is a flowchart describing the process of the control method according to an embodiment of the present invention;
Fig. 4 is a flowchart describing the process of detecting the user's forearm;
Fig. 5 is a flowchart describing the signal conversion process; and
Fig. 6 is a schematic diagram of a practical application of the method according to an embodiment of the present invention.
Embodiment
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. In the drawings, the same reference numerals denote identical or similar components, even when they appear in different figures. For clarity and conciseness, detailed descriptions of well-known functions and structures are omitted here, as they would otherwise obscure the subject matter of the invention.
Fig. 1 is a schematic diagram of a touchless control system according to an embodiment of the present invention. As shown in Fig. 1, the touchless control system comprises an information device 100 such as a computer, a depth camera 110, and a display screen 120. According to another embodiment of the present invention, the camera 110 may be integrated into the information device 100. According to a further embodiment, the connection between the camera 110 and the information device 100 shown in Fig. 1 is schematic; the two need not be connected by a wire, and may instead be associated in a manner configured by the user. For example, the user may set up, through keyboard input, the correspondence between gesture combinations and commands, thereby establishing the association between the camera and the information device.
Certain depth-image information about the user's forearm, such as its spatial position and pointing direction, is stored in the information device together with the corresponding control commands. When the system operates, video containing depth images of the user's forearm is captured by the camera 110 and fed into the information device 100. According to another exemplary embodiment of the present invention, the depth camera is an infrared camera, which removes any requirement on scene illumination.
The information device 100 detects the spatial position and pointing direction of the user's forearm from the depth video. Then, according to whether the actually detected position and direction of the forearm (represented, for example, as spatial angles relative to a predetermined plane such as the display screen) match a predefined position and direction, the information device 100 converts the detected forearm pose into the corresponding control command, such as a left click, a right click, or moving the cursor up, down, left, or right. In this way, the user 150 can wave a forearm to control a target system connected to the information device, or a target system within the information device itself.
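As an illustration of how a detected forearm pose could be mapped onto a screen position, the sketch below intersects the forearm's pointing ray with the display plane. The patent does not prescribe this computation; the function name, coordinate conventions, and thresholds here are assumptions for illustration only.

```python
import numpy as np

def screen_intersection(origin, direction, plane_z=0.0):
    """Intersect the forearm's pointing ray with the display plane.

    origin:    3-D forearm position in camera coordinates
    direction: 3-D pointing vector of the forearm (need not be unit length)
    plane_z:   depth of the display plane along the camera's z-axis

    Returns the (x, y) point on the plane, or None if the forearm is
    parallel to the screen or points away from it.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    dz = direction[2]
    if abs(dz) < 1e-9:        # ray parallel to the screen plane
        return None
    t = (plane_z - origin[2]) / dz
    if t <= 0:                # ray points away from the screen
        return None
    hit = origin + t * direction
    return float(hit[0]), float(hit[1])

# A forearm 1 m in front of the screen, pointing straight at it,
# lands at its own (x, y) coordinates.
print(screen_intersection([0.2, 0.1, 1.0], [0.0, 0.0, -1.0]))  # (0.2, 0.1)
```

A cursor driver would then scale the returned plane coordinates into pixel coordinates of the display screen 120.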
Fig. 2 is a schematic block diagram of the touchless control system according to an embodiment of the present invention. As shown in Fig. 2, when the user 150 waves a forearm, the depth camera 110 captures on-site depth video, rather than conventional video, and the captured depth video is input into the information device 100. The information device 100 detects the spatial position and pointing direction of the arm from this depth video, generates the control command corresponding to the waving motion, and sends it to the target system 140 to control its operation. As mentioned above, the target system 140 may also be part of the information device.
The predefined spatial positions and pointing directions of the user's forearm, together with the corresponding control commands such as 'left-button click', 'right-button click', and 'double click', are stored in a storage unit 105 of the information device.
As shown in Fig. 2, a forearm detection unit 101 provided in the information device 100 receives the on-site depth video from the camera and matches each depth frame of the video against predefined images or models of the user's forearm. If a matching frame is found, it is concluded that an image of the user's forearm is present in the depth video, and the spatial position and pointing direction of the forearm are then determined.
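The per-frame matching against a stored forearm template can be sketched as follows. The patent names template matching only as one possible technique; the sum-of-squared-differences score and the threshold below are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def match_template(depth_frame, template, max_ssd=1.0):
    """Slide a depth-image template over a frame and return the
    best-matching (row, col) location, or None if no window is
    close enough to the template."""
    fh, fw = depth_frame.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            window = depth_frame[y:y + th, x:x + tw]
            score = float(np.sum((window - template) ** 2))  # SSD
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos if best_score <= max_ssd else None

# Plant the template inside an otherwise flat frame and recover it.
template = np.array([[1.0, 2.0], [3.0, 4.0]])
frame = np.zeros((5, 5))
frame[2:4, 1:3] = template
print(match_template(frame, template))  # (2, 1)
```

A practical detector would additionally search over scales and use an FFT-based correlation rather than this brute-force scan.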
In addition, a signal conversion unit 104 is provided in the information device 100, which converts the detected spatial position and pointing direction into commands suitable for execution by the target system 140.
As shown in Fig. 2, the information device 100 is also provided with a definition unit 106 that allows the user 150 to define custom control commands. When the user 150 wants to define a distinctive forearm waving position and pointing direction of his or her own, the camera 110 captures depth images of the user's forearm, the spatial position and pointing direction of the forearm are determined, and they are stored in the storage unit 105 together with the corresponding control command.
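A minimal sketch of such a definition unit and its backing storage, assuming each pose is stored as a position plus a normalized pointing direction (the storage format is not specified in the patent, and all names here are hypothetical):

```python
import math

class GestureRegistry:
    """Sketch of the definition unit 106 plus storage unit 105: the
    user records a forearm pose (position and pointing direction)
    together with the control command it should trigger."""

    def __init__(self):
        self._entries = []   # list of (position, unit direction, command)

    def define(self, position, direction, command):
        # Normalize the direction so later comparisons are scale-free.
        norm = math.sqrt(sum(c * c for c in direction))
        if norm == 0:
            raise ValueError("pointing direction must be non-zero")
        unit = tuple(c / norm for c in direction)
        self._entries.append((tuple(position), unit, command))

    def entries(self):
        return list(self._entries)

registry = GestureRegistry()
registry.define((0.0, 0.0, 1.0), (0.0, 0.0, -2.0), "left-button click")
print(registry.entries()[0][2])  # left-button click
```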
The specific operation of the control method and of the units of the information device according to embodiments of the present invention is described below with reference to flowcharts. Fig. 3 is a flowchart describing the process of the control method according to an embodiment of the present invention.
At step S31, the user 150 waves a forearm in front of the depth camera in order to control the movement of the cursor on the screen of the information device 100. At step S32, the depth camera 110 of the information device 100, for example a computer, captures live video of the forearm, which is fed into the forearm detection unit 101 of the information device.
Next, at step S33, the forearm detection unit 101 receives the live video from the camera, detects the forearm, and determines the spatial position and pointing direction of the user's forearm by analyzing the depth images. If a forearm is present and its position and direction have been obtained, then at step S34 the signal conversion unit 104 converts the detected position and direction into the corresponding control command. At step S35, the control command is sent to the target system to control it.
For example, after detecting the spatial position and pointing direction of the forearm, the forearm detection unit determines which position on the screen the forearm points to, moves the cursor to that position, or clicks the icon at that position, thereby controlling the target system. According to embodiments of the present invention, additional information such as images of the user's face, eyes, or nose may also be used to assist the judgment. The object detection unit may detect the user's forearm and its spatial position and pointing direction from the depth images using the AdaBoost algorithm or other object recognition algorithms.
Fig. 4 is a flowchart describing the forearm detection process. As shown in Fig. 4, at step S41 the forearm detection unit 101 uses a technique such as template matching or texture matching to examine the captured live video according to the forearm templates stored in the storage unit 105 and to determine the spatial position and pointing direction of any detected forearm; at step S42 it judges whether a forearm is present. If not, the flow returns to step S41 and detection continues; otherwise, at step S43, the forearm detection unit 101 outputs the spatial position and pointing direction of the forearm.
As mentioned above, before forearm detection is carried out, the control commands corresponding to specific forearm positions and pointing directions may need to be predefined.
Fig. 5 is a flowchart describing the signal conversion process. The signal output by the forearm detection unit 101 corresponds to the spatial position and pointing direction of the forearm of the user 150. However, it cannot yet be executed by the target system 140, because the target system 140 cannot interpret these actions. Therefore, in the signal conversion unit 104, at step S71 the signal corresponding to the obtained forearm position and direction is acquired and analyzed, and at step S72 the signals are converted into suitable commands. After that, at step S73, the signal conversion unit 104 outputs the commands.
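Steps S71 to S73 can be sketched as a lookup that converts a detected pointing direction into the closest predefined command. The cosine-similarity matching rule and the command table below are illustrative assumptions, not taken from the patent:

```python
import math

# Hypothetical predefined table: unit pointing direction -> command.
COMMANDS = {
    (0.0, 1.0, 0.0): "cursor-up",
    (0.0, -1.0, 0.0): "cursor-down",
    (0.0, 0.0, -1.0): "left-button click",
}

def to_command(direction, min_cos=0.9):
    """Convert a detected forearm direction into the command whose
    stored direction it most closely matches (steps S71 and S72);
    returns None when no stored pose is close enough."""
    norm = math.sqrt(sum(c * c for c in direction))
    if norm == 0:
        return None
    unit = [c / norm for c in direction]
    best_cmd, best_cos = None, min_cos
    for stored, cmd in COMMANDS.items():
        cos = sum(a * b for a, b in zip(stored, unit))
        if cos > best_cos:
            best_cmd, best_cos = cmd, cos
    return best_cmd

print(to_command((0.0, 2.0, 0.1)))   # cursor-up
print(to_command((1.0, 0.0, 0.0)))   # None: no stored pose is close
```

Step S73 would then emit the returned command to the target system 140.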
Fig. 6 is a schematic diagram of a practical application of the method according to an embodiment of the present invention. As shown in the figure, the forearm position and direction detected by the forearm detection unit 101 indicate that the user's forearm points to the 'up' item in the menu on the screen, so the cursor moves to that item and clicks it, thereby operating the menu item. The user then moves the forearm to point at 'close'; once the forearm detection unit 101 detects that the user's forearm points at 'close', the signal conversion unit 104 converts the signal representing the position and direction of the user's forearm into a command to close the target system.
As described above, the device and method of the present invention can be used in camera-equipped information devices such as desktop PCs, laptop PCs, mobile phones, PDAs, electronic whiteboards, remote controls, and monitoring devices.
The above description serves only to illustrate embodiments of the present invention. Those skilled in the art should understand that any modification or partial replacement that does not depart from the scope of the present invention falls within the scope defined by the claims; accordingly, the protection scope of the present invention shall be determined by the appended claims.
Claims (8)
1. A method of controlling a contactless system, comprising the steps of:
detecting a user's forearm in captured on-site depth video so as to output information representing the spatial position and pointing direction of the user's forearm; and
converting the information representing the spatial position and pointing direction of the user's forearm into commands executable by a target system.
2. The method of claim 1, wherein the detecting step comprises:
reading a pre-stored user forearm template;
matching the user forearm template against each frame of the captured live video; and
in the case of a match, outputting the spatial position and direction information of the forearm in each matching frame of the live video.
3. The method of claim 2, further comprising, before the step of detecting the forearm, a step of defining, according to the user's needs, control commands corresponding to forearm positions and pointing directions.
4. An information device, comprising:
an object detection unit that detects a user's forearm in captured on-site depth video so as to output information representing the spatial position and pointing direction of the user's forearm; and
a signal conversion unit that converts the information representing the spatial position and pointing direction of the user's forearm into commands executable by a target system.
5. The information device of claim 4, wherein the forearm detection unit reads a pre-stored user forearm template, matches the user forearm template against each frame of the captured live video, and, in the case of a match, outputs the spatial position and direction information of the forearm in each matching frame of the live video.
6. The information device of claim 4, wherein the on-site depth video is captured by an infrared camera.
7. The information device of claim 5, further comprising a definition unit that defines, according to the user's needs, control commands corresponding to forearm positions and pointing directions.
8. A contactless system, comprising the information device of any one of claims 4 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110382053.0A CN103135745B (en) | 2011-11-25 | 2011-11-25 | Non-contact control method, information equipment and system based on depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103135745A true CN103135745A (en) | 2013-06-05 |
CN103135745B CN103135745B (en) | 2018-01-02 |
Family
ID=48495687
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110382053.0A Expired - Fee Related CN103135745B (en) | 2011-11-25 | 2011-11-25 | Non-contact control method, information equipment and system based on depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103135745B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106598422A (en) * | 2016-10-25 | 2017-04-26 | 深圳奥比中光科技有限公司 | Directivity-based control and hybrid control methods, control system and electronic equipment |
CN111191083A (en) * | 2019-09-23 | 2020-05-22 | 牧今科技 | Method and computing system for object identification |
CN111601129A (en) * | 2020-06-05 | 2020-08-28 | 北京字节跳动网络技术有限公司 | Control method, control device, terminal and storage medium |
US11763459B2 (en) | 2019-09-23 | 2023-09-19 | Mujin, Inc. | Method and computing system for object identification |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101699370A (en) * | 2009-11-10 | 2010-04-28 | 北京思比科微电子技术有限公司 | Depth detection based body identification control device |
CN102184020A (en) * | 2010-05-18 | 2011-09-14 | 微软公司 | Method for manipulating posture of user interface and posture correction |
US20110289455A1 (en) * | 2010-05-18 | 2011-11-24 | Microsoft Corporation | Gestures And Gesture Recognition For Manipulating A User-Interface |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180102; Termination date: 20181125 |