US20140180692A1 - Intent mining via analysis of utterances - Google Patents

Intent mining via analysis of utterances

Info

Publication number
US20140180692A1
Authority
US
United States
Prior art keywords
utterance
word
words
utterances
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/184,379
Inventor
Sachindra Joshi
Shantanu Godbole
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuance Communications Inc
Original Assignee
Nuance Communications Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nuance Communications Inc
Priority to US14/184,379
Assigned to NUANCE COMMUNICATIONS, INC. Assignors: GODBOLE, SHANTANU; JOSHI, SACHINDRA (assignment of assignors interest; see document for details)
Publication of US20140180692A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/1822 Parsing for meaning understanding

Definitions

  • FIG. 1 is an example diagram illustrating a speech processing system 100 according to embodiments herein.
  • speech processing system 100 includes a syntactic parser 115, word extractor 140, and an analyzer 150.
  • the speech processing system 100 can receive one or more utterances 110-1, 110-2, 110-3, . . . (collectively, utterances 110) in response to a query such as "How may I assist you?"
  • one embodiment herein includes a speech processing system 100 to identify one or more intended meanings of a received utterance 110 .
  • the speech processing system 100 can tag the utterance with respective one or more tags indicative of one or more classes that appear to best represent an intended meaning of the utterance 110 .
  • the speech processing system 100 performs a number of tasks on a received utterance.
  • the syntactic parser 115 of the speech processing system 100 converts the utterance to respective text and parses the text in the utterance 110 to produce syntactic relationship information 120 .
  • the syntactic relationship information 120 indicates syntactic relationships amongst the text-based words in the utterance 110 .
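  • As a rough illustration only (the patent does not mandate a particular parser; FIG. 2 mentions a Link Grammar parser as one option), the following sketch uses the open-source spaCy dependency parser as a stand-in for syntactic parser 115. The function name and the (head, relation, dependent) triple format are assumptions made for this example, not the patent's API.

```python
# Illustrative sketch only: spaCy stands in for syntactic parser 115;
# the triple format for syntactic relationship information 120 is an
# assumption made for this example.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline, installed separately

def parse_relationships(utterance_text):
    """Return (head word, dependency label, dependent word) triples."""
    doc = nlp(utterance_text)
    return [(tok.head.text, tok.dep_, tok.text) for tok in doc]

# parse_relationships("I need to change my next order") typically yields,
# among other triples, ('change', 'dobj', 'order'): a verb-object relation
# that later stages can group into a candidate word grouping.
```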
  • the word extractor 140 creates sets of words (e.g., candidate word groupings 145) using words in the utterance 110 based at least in part on the syntactic relationship information 120 identified by the syntactic parser 115.
  • the word extractor 140 can utilize the word extraction rules 125 to identify one or more groupings of related words in the utterance that most likely represent an intended meaning of the utterance.
  • the word extraction rules 125 include patterns and/or templates that indicate, for example, which types of words, locations of words, etc., in the utterance will be used to create respective word groupings.
  • the word extraction rules 125 specify which type and/or location of related words in the utterance are to be used to create the sets of words for a respective utterance. Accordingly, creating the sets of words for a respective utterance under test can include utilizing the identified syntactic relationship information 120 to identify groupings of related words in the respective utterance, and applying a set of word extraction rules 125 to the identified syntactic relationships to identify the types and/or locations of words used to create the sets of words, as in the sketch below.
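  • A minimal sketch of how such extraction rules might be encoded and applied to the relationship triples from the previous sketch. The rule format is an assumption for illustration; FIG. 4 is described, not reproduced, in this text.

```python
# Hypothetical encoding of word extraction rules 125: each rule names the
# dependency relations whose (head, dependent) word pairs form a candidate
# word grouping 145.
EXTRACTION_RULES = [
    {"name": "verb_object", "relations": {"dobj"}},  # e.g., change -> order
    # A verb+preposition+object rule (speak -> with -> representative) would
    # follow the preposition up to its governing verb; omitted for brevity.
]

def extract_word_groupings(triples):
    """Apply each extraction rule to the (head, relation, dependent)
    triples; every match yields one candidate word grouping."""
    groupings = []
    for rule in EXTRACTION_RULES:
        for head, dep, child in triples:
            if dep in rule["relations"]:
                groupings.append((head.lower(), child.lower()))
    return groupings
```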
  • an analyzer 150 in the speech processing system 100 maps each set of the sets of words (e.g., candidate word groupings 145) produced by the word extractor 140 to a respective candidate intent value to produce a list of candidate intent values for the utterance.
  • a candidate intent value is a possible intended meaning of the utterance.
  • a candidate word grouping produced by the speech processing system 100 for a respective utterance can include any number of words such as a single word or two or more words.
  • the word extractor 140 can produce one or multiple candidate word groupings 145 for each utterance.
  • An utterance can have one or more intended meanings; the speech processing system 100 can be configured to identify one or more most likely intended meanings of an utterance under test. If there is only one word grouping and corresponding candidate intent value produced for an utterance under test, then the analyzer can assign the single intended meaning of the single candidate intent value to the utterance.
  • the analyzer 150 applies further processing to narrow down the multiple possible intended meanings of the utterance to at least a most likely intended meaning. For example, in one embodiment, the analyzer is configured to select, from the list of possible intended meanings of the utterance as indicated by the candidate word groupings 145, a particular candidate intent value as being representative of the intent (i.e., intended meaning) of the utterance.
  • This technique of processing can be applied to each of the received utterances 110 .
  • the speech processing system 100 can be configured to classify the received utterances based on intended meaning.
  • the speech processing system 100 can be configured to maintain statistical information 160 for a pool of previously received utterances.
  • the statistical information 160 can indicate an historical frequency of receiving utterances of different intent types.
  • the statistical information 160 can be updated over time to reflect that a most recently received and analyzed utterance was assigned a particular candidate intent value.
  • the statistical information 160 can be configured to keep track of how often the speech processing system 100 receives utterances of each particular intent type.
  • Selection of an intent value amongst a group of possible intents to assign to a newly received utterance under test can be based on selection criteria employed by the analyzer 150 .
  • the analyzer 150 selects a particular candidate intent value for assigning to the utterance depending on which of the possible candidate intent values (e.g., first candidate intent value, second candidate intent value, etc.) occurred more often in the pool of previously received utterances, as specified by the statistical information 160.
  • the analyzer 150 can perform a frequency analysis and then sort the candidate meanings to perform intent mining.
  • the selected candidate value indicates a likely dominant subject matter or theme of the utterance.
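  • The following is a minimal sketch of one selection policy consistent with this description; the tie-breaking behavior and the choice to count only the selected intent are assumptions, since the patent leaves those details open.

```python
from collections import Counter

# Running statistical information 160: how often each intent value has
# occurred in the pool of previously processed utterances.
intent_pool_counts = Counter()

def select_dominant_intent(candidate_intents):
    """Select the candidate intent value that occurred most often in the
    pool so far, then update the pool statistics with the selection.
    On a tie (including a cold start with an empty pool), max() falls
    back to the first candidate; this policy is an assumption."""
    if not candidate_intents:
        return None
    best = max(candidate_intents, key=lambda c: intent_pool_counts[c])
    intent_pool_counts[best] += 1
    return best
```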
  • One embodiment includes so-called clustering.
  • the speech processing system 100 can include a set of tags for assigning to the utterances.
  • a tagging resource (such as the analyzer 150 or other suitable resource) identifies an appropriate tag that is representative of the intent value (or selected meaning) for an utterance. The tagging resource then tags the utterance with the appropriate tag to indicate a likely dominant subject matter intended by words in the received utterance. Accordingly, embodiments herein can include classifying one or more received utterances and then applying tags to identify a respective intended meaning.
  • FIG. 2 is an example diagram illustrating a sample utterance and corresponding relationship information 120 according to embodiments herein.
  • a Link Grammar parser can be used to parse the received utterance 110-1.
  • FIG. 3 is an example diagram illustrating a sample utterance and corresponding relationship information according to embodiments herein.
  • FIG. 4 is an example diagram illustrating word extraction rules 125 according to embodiments herein.
  • the word extraction rules 125 are used to identify the location of different types of words in a received utterance that are to be used to generate respective candidate word groupings 145.
  • the word extractor 140 applies these rules to produce candidate word groupings 145.
  • the word extractor 140 applies word extraction rule 125-1 (FIG. 4) to example utterance 110-1 "I need to change my next order," as traced in the short usage example below.
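  • Continuing the hedged sketches above, applying the hypothetical verb_object rule to this example utterance proceeds roughly as follows; the output shown is what the spaCy-based stand-in typically produces, not a trace from the patent.

```python
triples = parse_relationships("I need to change my next order")
print(extract_word_groupings(triples))
# e.g., [('change', 'order')] -> candidate intent value change_order
```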
  • a given utterance can include one candidate word grouping or multiple candidate word groupings.
  • FIG. 5 is an example diagram illustrating a listing of different example utterances that have been classified under a respective intent value (e.g., speak_representative) according to embodiments herein. That is, processing of each of the example utterances in FIG. 5 via the speech processing system 100 would produce the candidate word grouping 145-1 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. For example, during processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information 120. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-1 speak_representative.
  • FIG. 6 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., cancel_delivery) according to embodiments herein. That is, processing of the example utterances 110 in FIG. 6 via the speech processing system 100 would produce the candidate word grouping 145-2 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. For example, during processing, syntactic parser 115 would parse the utterances to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that each of the utterances 110 as shown includes the respective candidate word grouping 145-2 cancel_delivery.
  • FIG. 7 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., pay_bill) according to embodiments herein. That is, processing of each of the example utterances 110 in FIG. 7 via the speech processing system 100 would produce the candidate word grouping 145-3 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. During processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-3 pay_bill.
  • FIG. 8 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., when_delivery) according to embodiments herein. That is, processing of each of the example utterances 110 in FIG. 8 via the speech processing system 100 would produce the candidate word grouping 145-4 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the utterance. During processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-4 when_delivery.
  • FIG. 9 is an example diagram illustrating statistical information 160 indicating how often a respective intent value or candidate word grouping occurs in a pool of received utterances according to embodiments herein.
  • in the example shown, for a pool of received utterances: 457 of the utterances in the pool included the intent value (or group of words) speak_representative, 337 of the utterances included the intent value (or group of words) cancel_delivery, 312 of the utterances included the intent value (or group of words) place_order, etc.
  • embodiments herein can include keeping track of a frequency of occurrence for each of the different intent values for a pool of received utterances.
  • the speech processing system 100 is unsupervised and requires no training data.
  • the speech processing system 100 can collect and record the statistical information 160 over time as the speech processing system 100 receives and processes additional utterances. Accordingly, embodiments herein include maintaining statistical information for a pool of previously received utterances. As previously discussed, the statistical information 160 indicates a frequency of receiving utterances of different intent types. The speech processing system 100 updates the statistical information 160 to reflect that the different utterances were assigned the intent values.
  • FIG. 10 is an example diagram illustrating application of intent mining to an example utterance according to embodiments herein.
  • the speech processing system 100 receives the utterance “I would like to speak with a representative to change my order and cancel a delivery” in response to a query such as “How may I assist you?”
  • the syntactic parser 115 processes the received utterance to produce respective syntactic relationship information 120 for the utterance.
  • the word extractor 140 applies the word extraction rules 125 (such as those in FIG. 4) and syntactic relationship information 120 to identify candidate word groupings 145 such as speak_representative, change_order, and cancel_delivery as possible intended meanings of the utterance.
  • the speech processing system 100 utilizes the identified syntactic relationships 120 of words to identify how the words in the received utterance are related.
  • the speech processing system 100 then initiates application of word extraction rules or pattern rules to related words in the utterance to identify locations of words and produce candidate word groupings 145 .
  • the word extraction rules 125 specify which types of related words in the utterance are used to create the candidate sets of words.
  • the syntactic parser 115 produces respective syntactic relationship information 120 for the received utterance “I would like to speak with a representative to change my order and cancel a delivery”.
  • the word extractor 140 produces a first set of words (e.g., one of candidate word groupings 145) that includes a first word and a second word in the utterance, such as "speak" and "representative"; the first word "speak" is syntactically related to the second word "representative" as indicated by the identified syntactic relationships 120, as previously discussed;
  • the word extractor 140 produces a second set of words (e.g., another of candidate word groupings 145) that includes a third word such as "change" and a fourth word "order" in the utterance; the third word "change" is syntactically related to the fourth word "order" as indicated by the syntactic relationship information 120; a third set of words including "cancel" and "delivery" is produced in the same manner;
  • the word extractor 140 maps the combination of words including “speak” and “representative” to possible intent value speak_representative; the word extractor 140 maps the combination of words including “change” and “order” to possible intent value change_order; the word extractor 140 maps the combination of words including “cancel” and “delivery” to possible intent value cancel_delivery.
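  • The intent values shown here look like the grouped words joined by underscores. A simple canonicalization along those lines, an assumption for illustration rather than the patent's stated mapping, could be:

```python
def grouping_to_intent(grouping):
    """Map a candidate word grouping to a candidate intent value by
    joining its words with underscores."""
    return "_".join(grouping)

candidates = [grouping_to_intent(g)
              for g in [("speak", "representative"),
                        ("change", "order"),
                        ("cancel", "delivery")]]
# candidates == ['speak_representative', 'change_order', 'cancel_delivery']
```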
  • the received utterance potentially can be classified in a number of different classes.
  • the speech processing system 100 determines a dominant intent for assigning to the utterance based on statistical information 160 in FIG. 9 .
  • the speech processing system 100 can determine how often the word groupings in the instant utterance under test appeared in other received utterances.
  • the most often occurring intent value can be chosen for the instant utterance under test as the most likely intended meaning. For example, the intent value speak_representative appeared in 457 previously received utterances of a pool of received utterances, the intent value change_order appeared in 129 previously received utterances of the pool, and the intent value cancel_delivery appeared in 337 previously received utterances of the pool.
  • the analyzer 150 selects the candidate intent value speak_representative as being the most likely dominant intent for the utterance because it occurred most often in other previously received utterances. That is, in one example embodiment, the analyzer identifies a frequency of occurrence (e.g., number of utterances in a pool that include a specific grouping of words) that utterances in a pool of previously received utterances were of a same intent type as that of a first candidate intent value for the newly received utterance; the analyzer also identifies a frequency of occurrence that utterances in the pool of previously received utterances were of a same intent type as that of the second candidate intent value for the newly received utterance; and so on. The analyzer 150 then selects an intent value for the utterance under test “I would like to speak with a representative to change my order and cancel a delivery” based on the most often occurring intent value in previous utterances.
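  • Using the pool counts from FIG. 9, the selection reduces to a simple argmax over the candidates; a worked sketch with illustrative variable names:

```python
pool_counts = {"speak_representative": 457,
               "cancel_delivery": 337,
               "change_order": 129}
candidates = ["speak_representative", "change_order", "cancel_delivery"]
dominant = max(candidates, key=lambda c: pool_counts.get(c, 0))
# dominant == 'speak_representative' (457 > 337 > 129)
```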
  • the analyzer 150 can be configured to tag the respective utterance depending on the one or more intent values (namely, speak_representative, change_order, and cancel_delivery) identified for the utterance.
  • FIG. 11 illustrates how to map a respective intent value for an utterance to a respective label.
  • FIG. 11 is an example diagram illustrating intent values and corresponding assigned subject matter labels according to embodiments herein.
  • the label map can include multiple labels for potentially tagging the received utterance.
  • Each of the labels can be assigned multiple word groupings that commonly appear in utterances that fall under the corresponding label.
  • the label "AGENT" can be reserved for tagging any utterance including the word groupings speak_representative, speak_someone, talk_person, etc.;
  • the label “SKIP_A_DELIVERY” can be reserved for tagging any utterance including the word groupings skip_delivery, skip_order, hold_delivery, etc.;
  • the label “AGENT_BILLING” can be reserved for tagging any utterance including the word groupings have_bill, talk_bill, speak_billing, etc.; and so on.
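  • A minimal sketch of label map 1100, assuming a plain dictionary representation (the patent does not prescribe a data structure):

```python
# Label map 1100: each label is associated with the word groupings that
# commonly appear in utterances falling under that label.
LABEL_MAP = {
    "AGENT": {"speak_representative", "speak_someone", "talk_person"},
    "SKIP_A_DELIVERY": {"skip_delivery", "skip_order", "hold_delivery"},
    "AGENT_BILLING": {"have_bill", "talk_bill", "speak_billing"},
}

def labels_for(intent_values):
    """Return every label whose associated word groupings intersect the
    utterance's intent values; an utterance can receive multiple labels."""
    values = set(intent_values)
    return [label for label, groups in LABEL_MAP.items() if groups & values]

# labels_for(["speak_representative"]) -> ['AGENT']
```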
  • the analyzer 150 can utilize the label map 1100 to identify how to label received utterances.
  • an utterance can be assigned one or more labels indicating a class into which the utterance falls.
  • the example utterance which produces candidate word groupings speak_representative, change_order, and cancel_delivery can be assigned the labels AGENT, CHANGE_ITEMS, and CANCEL_DELIVERY.
  • the utterance also can be labeled with only a single label corresponding to the dominant intent value (e.g., speak_representative) such as AGENT.
  • FIG. 12 is an example diagram illustrating possible association of one or more labels to an utterance according to embodiments herein.
  • 457 utterances in a pool of previously received utterances included the intent value speak_representative.
  • for most of those utterances, the appropriate label was "AGENT".
  • for some of those utterances, the appropriate label was "AGENT_BILLING".
  • for other utterances, the appropriate label was "BILLING".
  • the candidate word groupings 145 derived for an utterance can indicate that a respective utterance may fall under one or more of multiple different classes such as “AGENT” (a majority label for an utterance including speak_representative), “AGENT_BILLING” (a minority label for an utterance including speak_representative), and “BILLING” (a minority label for an utterance including speak_representative).
  • FIG. 13 is a diagram illustrating an example computer architecture for executing a speech processing system 100 according to embodiments herein.
  • Computer system 1300 can include one or more computerized devices such as a personal computer, workstation, portable computing device, console, network terminal, processing device, network device, etc., operating as a server, client, etc.
  • the speech processing application 140-1 can be configured to include instructions to carry out any or all of the operations associated with syntactic parser 115, word extractor 140, analyzer 150, etc.
  • computer system 1300 of the present example includes an interconnect 1311 that couples computer readable storage media 1312 such as a non-transitory type of media in which digital information can be stored and retrieved, a processor 1313 , I/O interface 1314 , and a communications interface 1317 .
  • I/O interface 1314 enables receipt of utterances 110 .
  • I/O interface 1314 provides connectivity to repository 180 and, if present, other devices such as display screen, peripheral devices 316 , keyboard, computer mouse, etc.
  • Resources such as word extraction rules 125 , statistical information 160 , syntactic relationship information 120 , candidate word groupings, etc. can be stored and retrieved from repository 180 .
  • Computer readable storage medium 1312 can be any suitable device such as memory, optical storage, hard drive, floppy disk, etc. In one embodiment, the computer readable storage medium 1312 is a non-transitory storage media to store instructions and/or data.
  • Communications interface 1317 enables the computer system 1300 and processor 1313 to communicate over a network 190 to retrieve information from remote sources and communicate with other computers.
  • I/O interface 1314 enables processor 1313 to retrieve or attempt retrieval of stored information from repository 180 .
  • computer readable storage media 1312 can be encoded with speech processing application 140-1 (e.g., software, firmware, etc.) executed by processor 1313.
  • processor 1313 accesses computer readable storage media 1312 via the use of interconnect 1311 in order to launch, run, execute, interpret or otherwise perform the instructions of speech processing application 140-1 stored on computer readable storage medium 1312.
  • speech processing application 140-1 can include appropriate instructions, parsers, language models, analyzers, etc., to carry out any or all functionality associated with the speech processing system 100 as discussed herein.
  • Execution of the speech processing application 140-1 produces processing functionality such as speech processing process 140-2 in processor 1313.
  • the speech processing process 140-2 associated with processor 1313 represents one or more aspects of executing speech processing application 140-1 within or upon the processor 1313 in the computer system 1300.
  • the computer system 1300 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources to execute speech processing application 140-1.
  • computer system may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, netbook computer, mainframe computer system, handheld computer, workstation, network computer, application server, storage device, a consumer electronics device such as a camera, camcorder, set top box, mobile device, video game console, handheld video game device, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device.
  • Functionality supported by speech processing system 100 and speech processing application 140-1 will now be discussed via the flowcharts in FIGS. 14-16.
  • the speech processing system 100 can be configured to execute the steps in the flowcharts as discussed below.
  • FIG. 14 is a flowchart 1400 illustrating a general technique of implementing a speech processing system 100 and related resources according to embodiments herein.
  • In step 1410, the speech processing system 100 parses an utterance 110-1 to identify syntactic relationships 120-1 amongst words in the utterance 110-1.
  • In step 1420, the speech processing system 100 groups or creates sets of words from the utterance 110-1 based on word extraction rules and the syntactic relationships of words in the utterance 110-1.
  • In step 1430, the speech processing system 100 maps each set of the sets of words (e.g., candidate word groupings 145) to a respective candidate intent value (e.g., a possible intended meaning of the utterance).
  • In step 1440, the speech processing system 100 produces a list including candidate intent values for each of the sets of words (e.g., candidate word groupings 145).
  • In step 1450, the speech processing system 100 selects, from the list, a candidate intent value as being representative of an intent of the utterance. The sketch below ties these steps together.
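  • Tying the steps together, a hedged end-to-end sketch built from the illustrative helpers introduced earlier in this description (all function names are assumptions, not the patent's API):

```python
def mine_intent(utterance_text):
    """Sketch of flowchart 1400: parse (1410), group words (1420), map
    groupings to candidate intent values (1430-1440), and select a
    representative intent (1450)."""
    triples = parse_relationships(utterance_text)              # step 1410
    groupings = extract_word_groupings(triples)                # step 1420
    candidates = [grouping_to_intent(g) for g in groupings]    # steps 1430-1440
    return select_dominant_intent(candidates)                  # step 1450
```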
  • FIGS. 15 and 16 combine to form a flowchart 1500 (e.g., flowchart 1500-1 and flowchart 1500-2) illustrating implementation of a speech processing system 100 according to embodiments herein.
  • In step 1510, the speech processing system 100 parses text in a received utterance 110-1 to identify syntactic relationships amongst words in the utterance 110-1.
  • In step 1515, the speech processing system 100 groups or creates sets of words from the received utterance based on word extraction rules 125 and/or the syntactic relationships (as specified by syntactic relationship information 120) of words in the utterance.
  • the speech processing system 100 utilizes the identified syntactic relationships amongst words to identify groupings (e.g., candidate word groupings 145) of related words in the utterance.
  • the speech processing system 100 applies a set of word extraction rules 125 and/or patterns to the syntactic relationship information 120 to identify locations of words in the utterance to create the sets of words.
  • In step 1530, the speech processing system 100 maps each set of the sets of words to a respective candidate intent value.
  • the speech processing system 100 produces a list including a candidate intent value for each of the sets of words.
  • the list includes a first candidate intent value, a second candidate intent value, and so on.
  • In step 1610, the speech processing system 100 selects, from the list, a candidate intent value as being representative of an intent of the received utterance.
  • the speech processing system 100 identifies a frequency of occurrence that utterances in a pool of previously received utterances were of a same intent type as that of a first candidate intent value.
  • In step 1620, the speech processing system 100 identifies a frequency of occurrence that utterances in the pool of previously received utterances were of a same intent type as that of the second candidate intent value.
  • the speech processing system 100 selects the candidate intent value for the utterance depending on which of the first candidate intent value and the second candidate intent value occurred more often in the pool of previously received utterances.
  • the selected candidate value indicates a dominant subject matter representative of the utterance.
  • In step 1630, the speech processing system 100 identifies a tag representative of the selected candidate intent value for the utterance.
  • In step 1635, the speech processing system 100 tags the utterance with the tag to indicate a dominant subject matter intended by the utterance.
  • An algorithm as described herein, and generally, is considered to be a self-consistent sequence of operations or similar processing leading to a desired result.
  • such operations or processing involve physical manipulation of physical quantities.
  • These quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, or the like. It should be understood, however, that all of these and similar terms are to be associated with appropriate physical quantities and are merely convenient labels.

Abstract

According to example configurations, a speech processing system can include a syntactic parser, a word extractor, word extraction rules, and an analyzer. The syntactic parser of the speech processing system parses the utterance to identify syntactic relationships amongst words in the utterance. The word extractor utilizes word extraction rules to identify groupings of related words in the utterance that most likely represent an intended meaning of the utterance. The analyzer in the speech processing system maps each set of the sets of words produced by the word extractor to a respective candidate intent value to produce a list of candidate intent values for the utterance. The analyzer is configured to select, from the list of candidate intent values (i.e., possible intended meanings) of the utterance, a particular candidate intent value as being representative of the intent (i.e., intended meaning) of the utterance.

Description

    BACKGROUND
  • Recent developments in computers and corresponding speech recognition software algorithms have made it possible to control computer equipment via spoken input. Thus, it is now becoming more common that users are able to control their computers, electronics, personal devices, call routing, etc., via speech input.
  • Speech recognition systems are highly complex and operate by matching an acoustic signature of an utterance with acoustic signatures of words in a language model. As an example, according to conventional speech recognition systems, a microphone first converts a received acoustic signature of an uttered word into an electrical signal. An A/D (analog-to-digital) converter is typically used to convert the electrical signal into a digital representation of the uttered word. A digital signal processor converts the captured electrical signal from the time domain to the frequency domain.
  • Generally, as another part of the speech recognition process, the digital signal processor breaks down the utterance into its spectral components. Typically, the amplitude or intensity of the digital signal at various frequencies and temporal locations are then compared to a language model to determine the one or more words that were uttered.
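  • As a rough, hedged illustration of this front end (the patent provides no implementation; NumPy and all names below are assumptions for this sketch), the spectral decomposition can be approximated with a short-time Fourier transform:

```python
import numpy as np

def spectral_components(samples, frame_len=256, hop=128):
    """Break a 1-D array of digitized audio samples into overlapping
    windowed frames and return each frame's magnitude spectrum: signal
    intensity at various frequencies and temporal locations."""
    window = np.hanning(frame_len)
    frames = [samples[i:i + frame_len] * window
              for i in range(0, len(samples) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))
```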
  • Certain conventional speech recognition systems can be used for classifying utterances. For example, a conventional speech recognition system can receive and convert an utterance into respective text. In certain instances, a conventional speech recognition system can be configured to classify the utterance based on a key word in the utterance.
  • BRIEF DESCRIPTION
  • Conventional speech recognition systems can suffer from drawbacks. For example, conventional speech recognition systems typically analyze an utterance for the presence of words that indicate a likely class to which the utterance belongs. However, the accuracy of the classification can be quite low because an utterance may include many words, making the intended meaning of the utterance difficult to determine.
  • Embodiments herein deviate with respect to conventional speech recognition systems to provide accurate speech utterance classification. For example, one embodiment herein includes a speech processing system to identify one or more intended meanings of a received utterance based on word groupings derived from words detected in the utterance. In one embodiment, the speech processing system tags the utterance with respective one or more tags indicative of one or more classes believed to represent an intended meaning of the utterance.
  • More specifically, a speech processing system according to embodiments herein can include a syntactic parser, a word extractor, word pattern rules, and an analyzer. To identify an intended general meaning of the utterance, the speech processing system performs a number of tasks on a received utterance.
  • In one embodiment, the syntactic parser of the speech processing system parses a received utterance to identify syntactic relationships amongst words in the utterance. The word extractor creates sets of words using words in the utterance based at least in part on the syntactic relationships identified by the parser. As an example, the word extractor can utilize the word pattern rules to identify groupings of related words in the utterance that most likely represent an intended meaning of the utterance.
  • In one embodiment, the pattern rules specify which type and/or location of related words in the utterance are to be used to create the sets of words. Accordingly, creating the sets of words can include utilizing the identified syntactic relationships of words to identify groupings of related words in the utterance, and applying a set of pattern rules or word extraction rules to the identified syntactic relationships to identify the types and/or locations of words in the utterance used to create the sets of words.
  • Subsequent to creating the groupings of words, an analyzer in the speech processing system maps each set of the sets of words produced by the word extractor to a respective candidate intent value to produce a list of candidate intent values for the utterance. Thus, if there are multiple candidate word groupings derived from the utterance, the received utterance can be mapped to multiple candidate intent values. As its name suggests, a candidate intent value is a possible intended meaning of the utterance.
  • An utterance can have one or more intended meanings; the speech processing system can be configured to identify one or more most likely intended meanings of the utterance under test. In one embodiment, the analyzer is configured to select, from the list of possible intended meanings of the utterance, a particular candidate intent value as being representative of the intent (i.e., intended meaning) of the utterance.
  • The speech processing system can be configured to maintain statistical information for a pool of previously received utterances to determine the meaning of future utterances. For example, as previously discussed, the statistical information can indicate a frequency of receiving utterances of different intent types. The statistical information can be updated over time to reflect that a most recently received and analyzed utterance was assigned the particular candidate intent value as discussed above.
  • Selection of a particular intent value amongst a group of possible intents for an utterance can be based on selection criteria employed by the analyzer. For example, in one embodiment, the analyzer identifies a frequency of occurrence that utterances in a pool of previously received utterances were of a same intent type as that of a first candidate intent value for the newly received utterance; the analyzer also identifies a frequency of occurrence that utterances in the pool of previously received utterances were of a same intent type as that of the second candidate intent value for the newly received utterance; and so on. As previously discussed, the analyzer can identify multiple possible classes in which to categorize or classify the received utterance. In one embodiment, the analyzer then selects a particular candidate intent value for assigning to the utterance depending on which of the possible candidate intent values (e.g., first candidate intent value, second candidate intent value, etc.) occurred more often in the pool of previously received utterances. Thus, according to one embodiment, the analyzer can perform a frequency analysis and then sort the candidate meanings. As previously discussed, the selected candidate value indicates a likely dominant subject matter or theme of the utterance.
  • In yet further embodiments, the speech processing system includes a set of tags or labels. In one embodiment, a tagging resource (such as the analyzer or other suitable resource) identifies an appropriate tag that is representative of the intent value (or selected meaning) selected for the utterance. The tagging resource then tags the utterance with the appropriate tag to indicate a likely dominant subject matter intended by words in the received utterance. Accordingly, embodiments herein can include classifying one or more received utterances using tags.
  • As discussed above and below in further embodiments, techniques herein are well suited for use in software and/or hardware applications implementing speech recognition and classification of utterances based on intended meanings. However, it should be noted that embodiments herein are not limited to use in such applications and that the techniques discussed herein are well suited for other applications as well.
  • These and other embodiments are discussed in more detail below.
  • As mentioned above, note that embodiments herein can include a configuration of one or more computerized devices, workstations, handheld or laptop computers, or the like to carry out and/or support any or all of the method operations disclosed herein. In other words, one or more computerized devices or processors can be programmed and/or configured to operate as explained herein to carry out different embodiments of the invention.
  • Yet other embodiments herein include software programs to perform the steps and operations summarized above and disclosed in detail below. One such embodiment comprises a computer program product including a non-transitory computer-readable storage medium on which software instructions are encoded for subsequent execution. The instructions, when executed in a computerized device having a processor, program and/or cause the processor to perform the operations disclosed herein. Such arrangements are typically provided as software, code, instructions, and/or other data (e.g., data structures) arranged or encoded on a non-transitory computer readable storage medium such as an optical medium (e.g., CD-ROM), floppy disk, hard disk, memory stick, etc., or another medium such as firmware or microcode in one or more ROM, RAM, PROM, etc., or as an Application Specific Integrated Circuit (ASIC), etc. The software or firmware or other such configurations can be installed onto a computerized device to cause the computerized device to perform the techniques explained herein.
  • Accordingly, one particular embodiment of the present disclosure is directed to a computer program product that includes a computer readable storage medium having instructions stored thereon for speech recognition such as converting of an utterance to corresponding text. For example, in one embodiment, the instructions, when executed by a processor of a respective computer device, cause the processor to: parse an utterance to identify syntactic relationships amongst words in the utterance; create or group sets of words from the utterance based at least in part on the syntactic relationships; map each set of the sets of words to a respective candidate intent value; produce a list of candidate intent values for the utterance based on the mapping; and select, from the list, a candidate intent value as being representative of an intent of the utterance.
  • The ordering of the steps has been added for clarity's sake. These steps can be performed in any suitable order.
  • Other embodiments of the present disclosure include software programs and/or respective hardware to perform any of the method embodiment steps and operations summarized above and disclosed in detail below.
  • It is to be understood that the system, method, apparatus, instructions on computer readable storage media, etc., as discussed herein can be embodied strictly as a software program, as a hybrid of software and hardware, or as hardware alone such as within a processor, or within an operating system or a within a software application. Example embodiments of the invention may be implemented within products and/or software applications such as those manufactured by Nuance Communications, Inc., Burlington, Mass., USA.
  • Additionally, although each of the different features, techniques, configurations, etc., herein may be discussed in different places of this disclosure, it is intended that each of the concepts can be executed independently of each other or, where suitable, the concepts can be used in combination with each other. Accordingly, the one or more present inventions as described herein can be embodied and viewed in many different ways.
  • Also, note that this preliminary discussion of embodiments herein does not specify every embodiment and/or incrementally novel aspect of the present disclosure or claimed invention(s). Instead, this brief description only presents general embodiments and corresponding points of novelty over conventional techniques. For additional details and/or possible perspectives (permutations) of the invention(s), and additional points of novelty, the reader is directed to the Detailed Description section and corresponding figures of the present disclosure as further discussed below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is an example block diagram of a speech processing system to perform intent mining for one or more utterances according to embodiments herein.
  • FIG. 2 is an example diagram illustrating a sample utterance and corresponding relationship information generated by a syntactic parser according to embodiments herein.
  • FIG. 3 is an example diagram illustrating a sample utterance and corresponding relationship information generated by a syntactic parser according to embodiments herein.
  • FIG. 4 is an example diagram illustrating pattern or word extraction rules according to embodiments herein.
  • FIG. 5 is an example diagram illustrating a listing of different utterances that have been classified under a representative intent value according to embodiments herein.
  • FIG. 6 is an example diagram illustrating a listing of different utterances that have been classified under a representative intent value according to embodiments herein.
  • FIG. 7 is an example diagram illustrating a listing of different utterances that have been classified under a representative intent value according to embodiments herein.
  • FIG. 8 is an example diagram illustrating a listing of different utterances that have been classified under a representative intent value according to embodiments herein.
  • FIG. 9 is an example diagram illustrating sets of words extracted from utterances and corresponding frequency information of occurrence according to embodiments herein.
  • FIG. 10 is an example diagram illustrating application of intent mining to an example according to embodiments herein.
  • FIG. 11 is an example diagram illustrating intent values (e.g., representative word groupings) assigned to a respective subject matter label according to embodiments herein.
  • FIG. 12 is an example diagram illustrating intent values and corresponding assigned subject matter labels according to embodiments herein.
  • FIG. 13 is a diagram illustrating an example computer architecture for executing a speech processing system according to embodiments herein.
  • FIG. 14 is a flowchart illustrating an example method of implementing a speech processing system according to embodiments herein.
  • FIGS. 15 and 16 combine to form a flowchart illustrating an example method of implementing a speech processing system according to embodiments herein.
  • The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of preferred embodiments herein, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, with emphasis instead being placed upon illustrating the embodiments, principles, concepts, etc.
  • DETAILED DESCRIPTION
  • According to one embodiment, a speech processing system includes a syntactic parser, a word extractor, word extraction rules, and an analyzer. The syntactic parser of the speech processing system parses the utterance to identify syntactic relationships amongst words in the utterance. In accordance with the syntactic relationship information, the word extractor utilizes the word extraction rules to identify groupings of related words in the utterance that most likely represent an intended meaning of the utterance. The analyzer in the speech processing system maps each set of the sets of words produced by the word extractor to a respective candidate intent value to produce a list of candidate intent values for the utterance. The candidate intent values represent a possible intended meaning of the utterance. The analyzer is configured to select, from the list of candidate intent values (i.e., possible intended meanings) of the utterance, a particular candidate intent value as being representative of a dominant intent of the utterance.
  • Now, referring to the figures, FIG. 1 is an example diagram illustrating a speech processing system 100 according to embodiments herein. As shown, speech processing system 100 includes a syntactic parser 115, word extractor 140, and an analyzer 150. The speech processing system 100 can receive one or more utterances 110-1, 110-2, 110-3, . . . (collectively, utterances 110) in response to a query such as “How may I assist you?”
  • As previously discussed, embodiments herein deviate with respect to conventional speech recognition systems to provide accurate speech utterance classification. For example, one embodiment herein includes a speech processing system 100 to identify one or more intended meanings of a received utterance 110. The speech processing system 100 can tag the utterance with one or more respective tags indicative of the one or more classes that appear to best represent the intended meaning of the utterance 110.
  • To identify an intended general meaning of a received utterance or sequence of inputted text, the speech processing system 100 performs a number of tasks on a received utterance. For example, the syntactic parser 115 of the speech processing system 100 converts the utterance to respective text and parses the text in the utterance 110 to produce syntactic relationship information 120. The syntactic relationship information 120 indicates syntactic relationships amongst the text-based words in the utterance 110.
  • The word extractor 140 creates sets of words (e.g., candidate word groupings 145) using words in the utterance 110 based at least in part on the syntactic relationship information 120 identified by the syntactic parser 115. As an example, the word extractor 140 can utilize the word extraction rules 125 to identify one or more groupings of related words in the utterance that most likely represent an intended meaning of the utterance.
  • In one embodiment, the word extraction rules 125 include patterns and/or templates that indicate, for example, which types of words, locations of words, etc., in the utterance will be used to create respective word groupings.
  • In accordance with further embodiments, the word extraction rules 125 specify which types and/or locations of related words in the utterance are to be used to create the sets of words for a respective utterance. Accordingly, creating the sets of words for a respective utterance under test can include: utilizing the identified syntactic relationship information 120 to identify groupings of related words in the respective utterance; and applying the set of word extraction rules 125 to the identified syntactic relationships and the utterance under test to identify the types and/or locations of words in the utterance from which to create the sets of words.
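  • By way of illustration only, such extraction rules might be encoded as patterns over syntactic link types. The Python sketch below is a hypothetical rendering of two rules in the Lword/Rword style of FIG. 4; the dictionary layout and names are assumptions, not the actual rule format of the speech processing system 100.

      # Hypothetical encoding of two word extraction rules in the
      # Lword/Rword style of FIG. 4. The link type names ("O", "MVp",
      # "Js") follow Link Grammar conventions; the layout is assumed.
      WORD_EXTRACTION_RULES = [
          {   # verb-object link, e.g. "change ... order" -> change_order
              "name": "rule-125-1",
              "links": ["O"],
              "output": "{Lword}_{Rword}",
          },
          {   # verb-preposition-object chain, e.g.
              # "speak with ... representative" -> speak_representative
              "name": "rule-125-2",
              "links": ["MVp", "Js"],   # chained through a shared Xword
              "output": "{Lword}_{Rword}",
          },
      ]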
  • Subsequent to creating one or more candidate word groupings 145 for a respective utterance, an analyzer 150 in the speech processing system 100 maps each set of the sets of words (e.g., candidate word groupings 145) produced by the word extractor 140 to a respective candidate intent value to produce a list of candidate intent values for the utterance. As its name suggests, a candidate intent value is a possible intended meaning of the utterance.
  • Note that a candidate word grouping produced by the speech processing system 100 for a respective utterance can include any number of words such as a single word or two or more words. The word extractor 140 can produce one or multiple candidate word groupings 145 for each utterance.
  • An utterance can have one or more intended meanings; the speech processing system 100 can be configured to identify one or more most likely intended meanings of an utterance under test. If there is only one word grouping and corresponding candidate intent value produced for an utterance under test, then the analyzer can assign the single intended meaning of the single candidate intent value to the utterance.
  • If analysis of an utterance produces multiple possible candidates, subsequent to determining the list of possible intended meanings (e.g., candidate intent values) of an utterance under test, the analyzer 150 applies further processing to narrow down the multiple possible intended meanings of the utterance to the one that is most likely representative of the utterance. For example, in one embodiment, the analyzer is configured to select, from the list of possible intended meanings of the utterance as indicated by the candidate word groupings 145, a particular candidate intent value as being representative of the intent (i.e., intended meaning) of the utterance.
  • This technique of processing can be applied to each of the received utterances 110. In other words, the speech processing system 100 can be configured to classify the received utterances based on intended meaning.
  • The speech processing system 100 can be configured to maintain statistical information 160 for a pool of previously received utterances. For example, as previously discussed, the statistical information 160 can indicate a historical frequency of receiving utterances of different intent types. For newly received utterances and determination of respective intended meanings, the statistical information 160 can be updated over time to reflect that a most recently received and analyzed utterance was assigned a particular candidate intent value. Thus, over time, the statistical information 160 can be configured to keep track of how often the speech processing system 100 receives utterances of each particular intent type.
  • Selection of an intent value amongst a group of possible intents to assign to a newly received utterance under test can be based on selection criteria employed by the analyzer 150. In such an embodiment, the analyzer 150 selects a particular candidate intent value for assigning to the utterance depending on which of the possible candidate intent values (e.g., first candidate intent value, second candidate intent value, etc.) occurred most often in the pool of previously received utterances as specified by the statistical information 160. Thus, according to one embodiment, the analyzer 150 can perform a frequency analysis and then sort the candidate meanings to perform intent mining. As previously discussed, the selected candidate value indicates a likely dominant subject matter or theme of the utterance.
  • One embodiment includes so-called clustering. In such an embodiment, the analyzer 150 computes a respective frequency for each intent and sorts the intents by frequency. For each intent i in (i_1, i_2, . . . , i_n), let U = {u_1, u_2, . . . , u_m} be the set of utterances containing i. For each utterance j in U, if j is not already covered, then j is covered by i; mark j as covered.
  • Note that any suitable method can be implemented to perform clustering.
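  • As one non-limiting illustration, the clustering pass described above can be read as a greedy coverage procedure over intents sorted by frequency. The Python sketch below is one possible realization under that reading; the function and variable names are assumptions.

      from collections import defaultdict

      def cluster_by_intent(utterance_intents):
          """Greedy coverage over intents sorted by frequency: each
          uncovered utterance is covered by the first (most frequent)
          intent that contains it."""
          # frequency of each intent across all utterances
          freq = defaultdict(int)
          for intents in utterance_intents.values():
              for intent in intents:
                  freq[intent] += 1
          covered = set()
          clusters = defaultdict(list)
          for intent in sorted(freq, key=freq.get, reverse=True):
              for utt, intents in utterance_intents.items():
                  if intent in intents and utt not in covered:
                      clusters[intent].append(utt)
                      covered.add(utt)
          return dict(clusters)

      # Example: u1 and u2 are covered by the dominant intent.
      print(cluster_by_intent({
          "u1": {"speak_representative", "change_order"},
          "u2": {"speak_representative"},
          "u3": {"cancel_delivery"},
      }))
      # -> {'speak_representative': ['u1', 'u2'], 'cancel_delivery': ['u3']}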
  • In yet further embodiments, the speech processing system 100 can include a set of tags for assigning to the utterances. In one embodiment, a tagging resource (such as the analyzer 150 or other suitable resource) identifies an appropriate tag that is representative of the intent value (or selected meaning) for an utterance. The tagging resource then tags the utterance with the appropriate tag to indicate a likely dominant subject matter intended by words in the received utterance. Accordingly, embodiments herein can include classifying one or more received utterances and then applying tags to identify a respective intended meaning.
  • FIG. 2 is an example diagram illustrating a sample utterance and corresponding relationship information 120 according to embodiments herein. Syntactic parser 115 receives the example utterance 110-1 “I need to change my next order.” Based on parsing, the syntactic parser 115 produces syntactic relationship information 120-1 (p=pronoun, v=verb, a=adjective, n=noun, . . . ) as shown. By way of a non-limiting example, a Link Grammar parser can be used to parse the received utterance 110-1.
  • FIG. 3 is an example diagram illustrating a sample utterance and corresponding relationship information according to embodiments herein. Syntactic parser 115 receives the example utterance “I would like to speak with a customer service representative.” Based on parsing rules, the syntactic parser 115 produces syntactic relationship information 120-2 as shown (e.g., p=pronoun, v=verb, a=adjective, n=noun, . . . ).
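  • For readers who wish to experiment, a dependency parser can serve as a stand-in for the Link Grammar parser used in these examples. The sketch below uses spaCy purely as an assumed substitute; it yields head/dependent word relationships comparable in spirit to the syntactic relationship information 120 shown in the figures.

      # Sketch using spaCy as a stand-in syntactic parser; the patent's
      # examples use a Link Grammar parser instead.
      import spacy

      nlp = spacy.load("en_core_web_sm")
      doc = nlp("I would like to speak with a customer service representative.")
      for tok in doc:
          # e.g. "speak VERB xcomp head=like", "representative NOUN pobj head=with"
          print(f"{tok.text:15} {tok.pos_:6} {tok.dep_:8} head={tok.head.text}")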
  • FIG. 4 is an example diagram illustrating word extraction rules 125 according to embodiments herein. In one embodiment, in conjunction with the syntactic relationship information 120, the word extraction rules 125 are used to identify the location of different types of words in a received utterance that are to be used to generate respective candidate word groupings 145.
  • During operation, the word extractor 140 applies these rules to produce candidate word groupings 145. In the example of FIG. 2, the word extractor 140 applies word extraction rule 125-1 (FIG. 4) to example utterance 110-1 “I need to change my next order.” For this utterance 110-1 (FIG. 2) and using word extraction rule 125-1, Lword=“change” and Rword=“order” to produce the candidate word grouping change_order based on format Lword_Rword.
  • In the example of FIG. 3, the word extractor 140 applies word extraction rule 125-2 to example utterance 110-2 “I would like to speak with a customer representative.” For this utterance 110-2 and application of word extraction rule 125-2 (FIG. 4) as shown, MVp Lword=“speak”, MVp Xword=“with”, Js Xword=“with”, Js Rword=“representative” to produce the candidate word grouping speak_representative based on format Lword_Rword.
  • As discussed below, a given utterance can include one candidate word grouping or multiple candidate word groupings.
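  • A minimal sketch of how the extraction step of FIGS. 2-4 might be realized is shown below. It assumes parses are available as (link type, left word, right word) triples, which is an illustrative simplification rather than the actual parser output format.

      def apply_extraction_rules(links):
          """links: list of (link_type, left_word, right_word) triples in
          the style of the links shown in FIGS. 2-4. Returns candidate
          word groupings in Lword_Rword format."""
          groupings = []
          # rule 125-1: a direct object link joins Lword and Rword
          for typ, left, right in links:
              if typ == "O":
                  groupings.append(f"{left}_{right}")
          # rule 125-2: an MVp link and a Js link chained through the same
          # preposition (Xword) join the outer Lword and Rword
          for t1, l1, x1 in links:
              for t2, x2, r2 in links:
                  if t1 == "MVp" and t2 == "Js" and x1 == x2:
                      groupings.append(f"{l1}_{r2}")
          return groupings

      # FIG. 2 style: "change" --O--> "order"  => change_order
      print(apply_extraction_rules([("O", "change", "order")]))
      # FIG. 3 style: "speak" --MVp--> "with" --Js--> "representative"
      print(apply_extraction_rules([("MVp", "speak", "with"),
                                    ("Js", "with", "representative")]))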
  • FIG. 5 is an example diagram illustrating a listing of different example utterances that have been classified under a respective intent value (e.g., speak_representative) according to embodiments herein. That is, processing of each of the example utterances in FIG. 5 via the speech processing system 100 would produce the candidate word grouping 145-1 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. For example, during processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information 120. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-1 speak_representative.
  • FIG. 6 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., cancel_delivery) according to embodiments herein. That is, processing of the example utterances 110 in FIG. 6 via the speech processing system 100 would produce the candidate word grouping 145-2 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. For example, during processing, syntactic parser 115 would parse the utterances to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that each of the utterances 110 as shown includes the respective candidate word grouping 145-2 cancel_delivery.
  • FIG. 7 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., pay_bill) according to embodiments herein. That is, processing of each of the example utterances 110 in FIG. 7 via the speech processing system 100 would produce the candidate word grouping 145-3 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the respective utterance. During processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-3 pay_bill.
  • FIG. 8 is an example diagram illustrating a listing of different utterances that have been classified under a respective intent value (e.g., when_delivery) according to embodiments herein. That is, processing of each of the example utterances 110 in FIG. 8 via the speech processing system 100 would produce the candidate word grouping 145-4 (e.g., a candidate intent value indicating a possible intended meaning of the utterance) for the utterance. During processing, syntactic parser 115 parses a respective utterance to produce corresponding syntactic relationship information. The word extractor 140 then utilizes the respective word extraction rules 125 to identify that the respective utterance includes the respective candidate word grouping 145-4 when_delivery.
  • FIG. 9 is an example diagram illustrating statistical information 160 indicating how often a respective intent value or candidate word grouping occurs in a pool of received utterances according to embodiments herein. For example, for a pool of received utterances, 457 of the utterances in the received pool included the intent value (or group of words) speak_representative, 337 of the utterances in the received pool included the intent value (or group of words) cancel_delivery, 312 of the utterances in the received pool included the intent value (or group of words) place_order, etc. Accordingly, embodiments herein can include keeping track of a frequency of occurrence for each of the different intent values for a pool of received utterances.
  • In one embodiment, the speech processing system 100 is unsupervised and requires no training data. The speech processing system 100 can collect and record the statistical information 160 over time as the speech processing system 100 receives and processes additional utterances. Accordingly, embodiments herein include maintaining statistical information for a pool of previously received utterances. As previously discussed, the statistical information 160 indicates a frequency of receiving utterances of different intent types. The speech processing system 100 updates the statistical information 160 to reflect which intent values were assigned to the different utterances.
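  • As a simple illustration (an assumption, not the actual storage format of statistical information 160), such frequency data can be kept as a counter that is updated each time an utterance is assigned an intent value:

      from collections import Counter

      # statistical information 160: frequency of each intent value across
      # the pool of previously processed utterances (counts from FIG. 9)
      intent_frequencies = Counter({
          "speak_representative": 457,
          "cancel_delivery": 337,
          "place_order": 312,
      })

      def record_assignment(intent_value):
          """Update the pool statistics after an utterance is assigned an intent."""
          intent_frequencies[intent_value] += 1

      record_assignment("speak_representative")
      print(intent_frequencies["speak_representative"])  # 458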
  • FIG. 10 is an example diagram illustrating application of intent mining to an example utterance according to embodiments herein. Assume in this example that the speech processing system 100 receives the utterance “I would like to speak with a representative to change my order and cancel a delivery” in response to a query such as “How may I assist you?” In such an instance, the syntactic parser 115 processes the received utterance to produce respective syntactic relationship information 120 for the utterance.
  • In a manner as previously discussed, the word extractor 140 applies the word extraction rules 125 (such as those in FIG. 4) and syntactic relationship information 120 to identify candidate word groupings 145 such as speak_representative, change_order, and cancel_delivery as possible intended meanings of the utterance.
  • As previously discussed, the speech processing system 100 utilizes the identified syntactic relationships 120 of words to identify how the words in the received utterance are related. The speech processing system 100 then initiates application of word extraction rules or pattern rules to related words in the utterance to identify locations of words and produce candidate word groupings 145. As previously discussed, the word extraction rules 125 specify which types of related words in the utterance are used to create the candidate sets of words.
  • In the example shown, the syntactic parser 115 produces respective syntactic relationship information 120 for the received utterance “I would like to speak with a representative to change my order and cancel a delivery”. By applying the word extraction rules 125 to the syntactic relationship information 120 and the text in the received utterance in a manner as previously discussed, the word extractor 140 produces a first set of words (e.g., one of candidate word groupings 145) including a first word “speak” and a second word “representative”, which are syntactically related as indicated by the identified syntactic relationships 120. The word extractor 140 produces a second set of words (e.g., another of candidate word groupings 145) including a third word “change” and a fourth word “order”, which are syntactically related as indicated by the syntactic relationship information 120 for the received utterance. The word extractor 140 produces a third set of words (e.g., yet another of candidate word groupings 145) including a fifth word “cancel” and a sixth word “delivery”, which are syntactically related as indicated by the syntactic relationship information 120 for the utterance.
  • As shown, the word extractor 140 maps the combination of words including “speak” and “representative” to possible intent value speak_representative; the word extractor 140 maps the combination of words including “change” and “order” to possible intent value change_order; the word extractor 140 maps the combination of words including “cancel” and “delivery” to possible intent value cancel_delivery.
  • Given these candidate word groupings 145 (namely, speak_representative, change_order, and cancel_delivery), the received utterance can potentially be classified into a number of different classes.
  • In one embodiment, the speech processing system 100 determines a dominant intent for assigning to the utterance based on statistical information 160 in FIG. 9. For example, the speech processing system 100 can determine how often the word groupings in the instant utterance under test appeared in other received utterances and select the most often occurring intent value as the most likely intended meaning. For example, the intent value speak_representative appeared in 457 previously received utterances of a pool of received utterances, the intent value change_order appeared in 129 previously received utterances of the pool, and the intent value cancel_delivery appeared in 337 previously received utterances of the pool.
  • In this example, based on the analysis, the analyzer 150 selects the candidate intent value speak_representative as being the most likely dominant intent for the utterance because it occurred most often in other previously received utterances. That is, in one example embodiment, the analyzer identifies a frequency of occurrence (e.g., number of utterances in a pool that include a specific grouping of words) that utterances in a pool of previously received utterances were of a same intent type as that of a first candidate intent value for the newly received utterance; the analyzer also identifies a frequency of occurrence that utterances in the pool of previously received utterances were of a same intent type as that of the second candidate intent value for the newly received utterance; and so on. The analyzer 150 then selects an intent value for the utterance under test “I would like to speak with a representative to change my order and cancel a delivery” based on the most often occurring intent value in previous utterances.
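  • Under this selection criterion, choosing the dominant intent reduces to taking the candidate with the largest historical count. A hypothetical sketch, using the counts of FIG. 9:

      def select_dominant_intent(candidates, intent_frequencies):
          """Pick the candidate intent value that occurred most often in the
          pool of previously received utterances (statistical information 160)."""
          return max(candidates, key=lambda c: intent_frequencies.get(c, 0))

      # counts from FIG. 9 for the example utterance's three candidates
      counts = {"speak_representative": 457, "cancel_delivery": 337, "change_order": 129}
      print(select_dominant_intent(
          ["speak_representative", "change_order", "cancel_delivery"], counts))
      # -> speak_representative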
  • The analyzer 150 can be configured to tag the respective utterance depending on the one or more intent values (namely, speak_representative, change_order, and cancel_delivery) identified for the utterance. FIG. 11 illustrates how to map a respective intent value for an utterance to a respective label.
  • FIG. 11 is an example diagram illustrating intent values and corresponding assigned subject matter labels according to embodiments herein. As shown, the label map can include multiple labels for potentially tagging the received utterance. Each of the labels can be assigned multiple word groupings that commonly appear in utterances that fall under the corresponding label. For example, the label “AGENT” is a candidate reserved for tagging any utterance including the word groupings speak_representative, speak_someone, talk_person, etc.; the label “SKIP_A_DELIVERY” can be reserved for tagging any utterance including the word groupings skip_delivery, skip_order, hold_delivery, etc.; the label “AGENT_BILLING” can be reserved for tagging any utterance including the word groupings have_bill, talk_bill, speak_billing, etc.; and so on.
  • Thus, the analyzer 150 can utilize the label map 1100 to identify how to label received utterances. Depending on the embodiment, an utterance can be assigned one or more labels indicating a class into which the utterance falls. If desired, the example utterance which produces candidate word groupings speak_representative, change_order, and cancel_delivery can be assigned the labels AGENT, CHANGE_ITEMS, and CANCEL_DELIVERY. The utterance also can be labeled with only a single label corresponding to the dominant intent value (e.g., speak_representative) such as AGENT.
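  • A hypothetical rendering of label map 1100 and the lookup it supports is sketched below; the map contents are abbreviated from FIG. 11, and the function name is an assumption.

      # Hypothetical label map 1100: each subject matter label lists the
      # word groupings that commonly appear in utterances falling under it.
      LABEL_MAP = {
          "AGENT":           {"speak_representative", "speak_someone", "talk_person"},
          "SKIP_A_DELIVERY": {"skip_delivery", "skip_order", "hold_delivery"},
          "AGENT_BILLING":   {"have_bill", "talk_bill", "speak_billing"},
      }

      def labels_for(groupings):
          """Return every label whose word groupings intersect the
          utterance's candidate word groupings."""
          return [label for label, groups in LABEL_MAP.items()
                  if groups & set(groupings)]

      print(labels_for(["speak_representative", "change_order", "cancel_delivery"]))
      # -> ['AGENT']  (CHANGE_ITEMS / CANCEL_DELIVERY omitted from this sketch)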
  • FIG. 12 is an example diagram illustrating possible association of one or more labels to an utterance according to embodiments herein.
  • For example, as previously discussed with respect to FIGS. 5-9, as shown in listing 1200, 457 utterances in a pool of previously received utterances included the intent value speak_representative. In 452 utterances of the 457 instances, the appropriate label for each of the utterances was the label “AGENT”. In 4 of the 457 instances, the appropriate label for each of the utterances was the label “AGENT_BILLING”. In 1 of the 457 instances, the appropriate label for respective utterance was the label “BILLING”.
  • Accordingly, the candidate word groupings 145 derived for an utterance can indicate that a respective utterance may fall under one or more of multiple different classes such as “AGENT” (a majority label for an utterance including speak_representative), “AGENT_BILLING” (a minority label for an utterance including speak_representative), and “BILLING” (a minority label for an utterance including speak_representative).
  • FIG. 13 is a diagram illustrating an example computer architecture for executing a speech processing system 100 according to embodiments herein.
  • Computer system 1300 can include one or more computerized devices such as a personal computer, workstation, portable computing device, console, network terminal, processing device, network device, etc., operating as a server, client, etc. The speech processing application 140-1 can be configured to include instructions to carry out any or all of the operations associated with syntactic parser 115, word extractor 140, analyzer 150, etc.
  • Note that the following discussion provides a basic embodiment indicating how to execute aspects of speech processing system 100 according to embodiments herein. However, it should be noted that the actual configuration for carrying out the operations as described herein can vary depending on a respective application.
  • As shown, computer system 1300 of the present example includes an interconnect 1311 that couples computer readable storage media 1312 such as a non-transitory type of media in which digital information can be stored and retrieved, a processor 1313, I/O interface 1314, and a communications interface 1317.
  • I/O interface 1314 enables receipt of utterances 110. I/O interface 1314 provides connectivity to repository 180 and, if present, other devices such as display screen, peripheral devices 316, keyboard, computer mouse, etc. Resources such as word extraction rules 125, statistical information 160, syntactic relationship information 120, candidate word groupings, etc. can be stored and retrieved from repository 180.
  • Computer readable storage medium 1312 can be any suitable device such as memory, optical storage, hard drive, floppy disk, etc. In one embodiment, the computer readable storage medium 1312 is a non-transitory storage media to store instructions and/or data.
  • Communications interface 1317 enables the computer system 1300 and processor 1313 to communicate over a network 190 to retrieve information from remote sources and communicate with other computers. I/O interface 1314 enables processor 1313 to retrieve or attempt retrieval of stored information from repository 180.
  • As shown, computer readable storage media 1312 can be encoded with speech processing application 140-1 (e.g., software, firmware, etc.) executed by processor 1313.
  • During operation of one embodiment, processor 1313 accesses computer readable storage media 1312 via the use of interconnect 1311 in order to launch, run, execute, interpret or otherwise perform the instructions of speech processing application 140-1 stored on computer readable storage medium 1312. As previously discussed, speech processing application 140-1 can include appropriate instructions, parsers, language models, analyzers, etc., to carry out any or all functionality associated with the speech processing system 100 as discussed herein.
  • Execution of the speech processing application 140-1 produces processing functionality such as speech processing process 140-2 in processor 1313. In other words, the speech processing process 140-2 associated with processor 1313 represents one or more aspects of executing speech processing application 140-1 within or upon the processor 1313 in the computer system 1300.
  • Those skilled in the art will understand that the computer system 1300 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources to execute speech processing application 140-1.
  • In accordance with different embodiments, note that the computer system 1300 may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, netbook computer, mainframe computer system, handheld computer, workstation, network computer, application server, storage device, a consumer electronics device such as a camera, camcorder, set top box, mobile device, video game console, or handheld video game device, a peripheral device such as a switch, modem, or router, or, in general, any type of computing or electronic device.
  • Functionality supported by speech processing system 100 and speech processing application 140-1 will now be discussed via flowcharts in FIGS. 14-16. As discussed above, the speech processing system 100 can be configured to execute the steps in the flowcharts as discussed below.
  • Note that there will be some overlap with respect to concepts discussed above. Also, note that the steps in the below flowcharts need not always be executed in the order shown. That is, the steps can be executed in any suitable order.
  • FIG. 14 is a flowchart 1400 illustrating a general technique of implementing a speech processing system 100 and related resources according to embodiments herein.
  • In step 1410, the speech processing system 100 parses an utterance 110-1 to identify syntactic relationships 120-1 amongst words in the utterance 110-1.
  • In step 1420, the speech processing system 100 groups or creates sets of words from the utterance 110-1 based on word extraction rules and the syntactic relationships of words in the utterance 110-1.
  • In step 1430, the speech processing system 100 maps each set of the sets of words (e.g., candidate word groupings 145) to a respective candidate intent value (e.g., possible intended meaning of the utterance).
  • In step 1440, the speech processing system 100 produces a list including candidate intent values for each of the sets of words (e.g., candidate word groupings 145).
  • In step 1450, the speech processing system 100 selects, from the list, a candidate intent value as being representative of an intent of the utterance.
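  • Taken together, steps 1410 through 1450 can be sketched as a single function. The callables below are placeholders standing in for the syntactic parser 115 and word extractor 140; they are assumptions for illustration, not the actual implementation.

      def mine_intent(utterance, parse, extract_groups, intent_frequencies):
          """End-to-end sketch of flowchart 1400 under assumed interfaces."""
          # step 1410: parse the utterance into syntactic relationships
          relationships = parse(utterance)
          # step 1420: create sets of words using the word extraction rules
          word_sets = extract_groups(relationships)
          # steps 1430-1440: map each set to a candidate intent value
          candidates = [f"{left}_{right}" for left, right in word_sets]
          # step 1450: select the candidate most frequent in the utterance pool
          return max(candidates, key=lambda c: intent_frequencies.get(c, 0))

      # toy stand-ins for the parser and extractor
      result = mine_intent(
          "I would like to speak with a representative to cancel a delivery",
          parse=lambda u: u,                                   # placeholder parse
          extract_groups=lambda _: [("speak", "representative"),
                                    ("cancel", "delivery")],
          intent_frequencies={"speak_representative": 457, "cancel_delivery": 337},
      )
      print(result)  # -> speak_representative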
  • FIGS. 15 and 16 combine to form a flowchart 1500 (e.g., flowchart 1500-1 and flowchart 1500-2) illustrating implementation of a speech processing system 100 according to embodiments herein.
  • In step 1510, the speech-processing system 100 parses text in a received utterance 110-1 to identify syntactic relationships amongst words in the utterance 110-1.
  • In step 1515, the speech-processing system 100 groups or creates sets of words from the received utterance based on word extraction rules 125 and/or the syntactic relationships (as specified by syntactic relationship information 120) of words in the utterance.
  • In sub-step 1520, the speech-processing system 100 utilizes the identified syntactic relationships amongst words to identify groupings (e.g., candidate word groupings 145) of related words in the utterance.
  • In sub-step 1525, the speech-processing system 100 applies a set of word extraction rules 125 and/or patterns to the identified syntactic relationship information 120 to identify locations of words in the utterance to create the sets of words.
  • In step 1530, the speech-processing system 100 maps each set of the sets of words to a respective candidate intent value.
  • In step 1535, the speech-processing system 100 produces a list including a candidate intent value for each of the sets of words. In one embodiment, the list includes a first candidate intent value, a second candidate intent value, and so on.
  • In step 1610, the speech-processing system 100 selects, from the list, a candidate intent value as being representative of an intent of the received utterance.
  • In sub-step 1615, the speech-processing system 100 identifies a frequency of occurrence that utterances in a pool of previously received utterances were of a same intent type as that of a first candidate intent value.
  • In step 1620, the speech-processing system 100 identifies a frequency of occurrence that utterances in the pool of previously received utterances were of a same intent type as that of the second candidate intent value.
  • In step 1625, the speech-processing system 100 selects the candidate intent value for the utterance depending on which of the first candidate intent value and the second candidate intent value occurred more often in the pool of previously received utterances. The selected candidate value indicates a dominant subject matter representative of the utterance.
  • In step 1630, the speech-processing system 100 identifies a tag representative of the selected candidate intent value for the utterance.
  • In step 1635, the speech-processing system 100 tags the utterance with the tag to indicate a dominant subject matter intended by the utterance.
  • As discussed above, techniques herein are well suited for use in software and/or hardware applications implementing speech recognition and classification of utterances based on intended meanings. However, it should be noted that embodiments herein are not limited to use in such applications and that the techniques discussed herein are well suited for other applications as well.
  • Based on the description set forth herein, numerous specific details have been set forth to provide a thorough understanding of claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, systems, etc., that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
  • Some portions of the detailed description have been presented in terms of algorithms or symbolic representations of operations on data bits or binary digital signals stored within a computing system memory, such as a computer memory. These algorithmic descriptions or representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm as described herein, and generally, is considered to be a self-consistent sequence of operations or similar processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these and similar terms are to be associated with appropriate physical quantities and are merely convenient labels.
  • Unless specifically stated otherwise, as apparent from the discussion herein, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a computing platform, such as a computer or a similar electronic computing device, that manipulates or transforms data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
  • While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application as defined by the appended claims. Such variations are intended to be covered by the scope of this present application. As such, the foregoing description of embodiments of the present application is not intended to be limiting. Rather, any limitations to the invention are presented in the following claims.

Claims (2)

1. A method comprising:
parsing an utterance to identify syntactic relationships amongst words in the utterance;
creating sets of words from the utterance based on the syntactic relationships;
mapping each set of the sets of words to a respective candidate intent value to produce a list of candidate intent values for the utterance; and
selecting, from the list, a candidate intent value as being representative of an intent of the utterance.
2-23. (canceled)
US14/184,379 2011-02-28 2014-02-19 Intent mining via analysis of utterances Abandoned US20140180692A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/184,379 US20140180692A1 (en) 2011-02-28 2014-02-19 Intent mining via analysis of utterances

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/037,114 US8688453B1 (en) 2011-02-28 2011-02-28 Intent mining via analysis of utterances
US14/184,379 US20140180692A1 (en) 2011-02-28 2014-02-19 Intent mining via analysis of utterances

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/037,114 Continuation US8688453B1 (en) 2011-02-28 2011-02-28 Intent mining via analysis of utterances

Publications (1)

Publication Number Publication Date
US20140180692A1 true US20140180692A1 (en) 2014-06-26

Family

ID=50348938

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/037,114 Active 2031-09-21 US8688453B1 (en) 2011-02-28 2011-02-28 Intent mining via analysis of utterances
US14/184,379 Abandoned US20140180692A1 (en) 2011-02-28 2014-02-19 Intent mining via analysis of utterances

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/037,114 Active 2031-09-21 US8688453B1 (en) 2011-02-28 2011-02-28 Intent mining via analysis of utterances

Country Status (1)

Country Link
US (2) US8688453B1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8983840B2 (en) * 2012-06-19 2015-03-17 International Business Machines Corporation Intent discovery in audio or text-based conversation
US20180203833A1 (en) * 2016-11-04 2018-07-19 Semantic Machines, Inc. Data collection for a new conversational dialogue system
US10586530B2 (en) 2017-02-23 2020-03-10 Semantic Machines, Inc. Expandable dialogue system
US20200142719A1 (en) * 2018-11-02 2020-05-07 International Business Machines Corporation Automatic generation of chatbot meta communication
US10713288B2 (en) 2017-02-08 2020-07-14 Semantic Machines, Inc. Natural language content generator
US10762892B2 (en) 2017-02-23 2020-09-01 Semantic Machines, Inc. Rapid deployment of dialogue system
US11069340B2 (en) 2017-02-23 2021-07-20 Microsoft Technology Licensing, Llc Flexible and expandable dialogue system
US11132499B2 (en) 2017-08-28 2021-09-28 Microsoft Technology Licensing, Llc Robust expandable dialogue system
CN113657120A (en) * 2021-08-23 2021-11-16 深圳科卫机器人科技有限公司 Human-computer interaction intention analysis method and device, computer equipment and storage medium
US11182565B2 (en) * 2018-02-23 2021-11-23 Samsung Electronics Co., Ltd. Method to learn personalized intents
US11314940B2 (en) 2018-05-22 2022-04-26 Samsung Electronics Co., Ltd. Cross domain personalized vocabulary learning in intelligent assistants

Families Citing this family (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US20120311585A1 (en) 2011-06-03 2012-12-06 Apple Inc. Organizing task items that represent tasks to perform
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9424840B1 (en) * 2012-08-31 2016-08-23 Amazon Technologies, Inc. Speech recognition platforms
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR20230137475A (en) 2013-02-07 2023-10-04 애플 인크. Voice trigger for a digital assistant
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10748529B1 (en) 2013-03-15 2020-08-18 Apple Inc. Voice activated device for use with a voice-based digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
EP3937002A1 (en) 2013-06-09 2022-01-12 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
EP3480811A1 (en) 2014-05-30 2019-05-08 Apple Inc. Multi-command single utterance input method
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9842593B2 (en) 2014-11-14 2017-12-12 At&T Intellectual Property I, L.P. Multi-level content analysis and response
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
WO2016151692A1 (en) * 2015-03-20 2016-09-29 株式会社 東芝 Tagging support device, method and program
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10200824B2 (en) 2015-05-27 2019-02-05 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10387569B2 (en) * 2015-08-28 2019-08-20 Freedom Solutions Group, Llc Automated document analysis comprising a user interface based on content types
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10331312B2 (en) 2015-09-08 2019-06-25 Apple Inc. Intelligent automated assistant in a media environment
US10740384B2 (en) 2015-09-08 2020-08-11 Apple Inc. Intelligent automated assistant for media search and playback
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10956666B2 (en) 2015-11-09 2021-03-23 Apple Inc. Unconventional virtual assistant interactions
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049670B2 (en) * 2016-06-06 2018-08-14 Google Llc Providing voice action discoverability example for trigger term
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
US9961200B1 (en) * 2017-03-28 2018-05-01 Bank Of America Corporation Derived intent collision detection for use in a multi-intent matrix
US20180288230A1 (en) * 2017-03-29 2018-10-04 International Business Machines Corporation Intention detection and handling of incoming calls
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK180048B1 (en) 2017-05-11 2020-02-04 Apple Inc. MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. Low-latency intelligent automated assistant
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
US20180336892A1 (en) 2017-05-16 2018-11-22 Apple Inc. Detecting a trigger of a digital assistant
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
KR102509821B1 (en) 2017-09-18 2023-03-14 삼성전자주식회사 Method and apparatus for generating oos(out-of-service) sentence
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10684950B2 (en) 2018-03-15 2020-06-16 Bank Of America Corporation System for triggering cross channel data caching
US10497366B2 (en) * 2018-03-23 2019-12-03 Servicenow, Inc. Hybrid learning system for natural language understanding
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10665228B2 (en) 2018-05-23 2020-05-26 Bank of America Corporaiton Quantum technology for use with extracting intents from linguistics
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US11076039B2 (en) 2018-06-03 2021-07-27 Apple Inc. Accelerated task performance
US10685645B2 (en) * 2018-08-09 2020-06-16 Bank Of America Corporation Identification of candidate training utterances from human conversations with an intelligent interactive assistant
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
CN109657229A (en) * 2018-10-31 2019-04-19 北京奇艺世纪科技有限公司 A kind of intention assessment model generating method, intension recognizing method and device
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
US11715467B2 (en) * 2019-04-17 2023-08-01 Tempus Labs, Inc. Collaborative artificial intelligence method and system
US20200342874A1 (en) * 2019-04-26 2020-10-29 Oracle International Corporation Handling explicit invocation of chatbots
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11217251B2 (en) 2019-05-06 2022-01-04 Apple Inc. Spoken notifications
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
DK201970510A1 (en) 2019-05-31 2021-02-11 Apple Inc Voice identification in digital assistant systems
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11227599B2 (en) 2019-06-01 2022-01-18 Apple Inc. Methods and user interfaces for voice-based control of electronic devices
WO2021056255A1 (en) 2019-09-25 2021-04-01 Apple Inc. Text detection using global geometry estimators
US11038934B1 (en) 2020-05-11 2021-06-15 Apple Inc. Digital assistant hardware abstraction
US11061543B1 (en) 2020-05-11 2021-07-13 Apple Inc. Providing relevant data items based on context
US11755276B2 (en) 2020-05-12 2023-09-12 Apple Inc. Reducing description length based on confidence
US11490204B2 (en) 2020-07-20 2022-11-01 Apple Inc. Multi-device audio adjustment coordination
US11438683B2 (en) 2020-07-21 2022-09-06 Apple Inc. User identification using headphones
US11514897B2 (en) * 2020-09-25 2022-11-29 Genesys Telecommunications Laboratories, Inc. Systems and methods relating to bot authoring by mining intents from natural language conversations
US11265396B1 (en) 2020-10-01 2022-03-01 Bank Of America Corporation System for cross channel data caching for performing electronic activities
US11735207B1 (en) * 2021-09-30 2023-08-22 Wells Fargo Bank, N.A. Systems and methods for determining a next action based on weighted predicted emotions, entities, and intents
CN113742027B (en) * 2021-11-05 2022-07-15 Shenzhen Transsion Holdings Co., Ltd. Interaction method, intelligent terminal and readable storage medium
US11880307B2 (en) 2022-06-25 2024-01-23 Bank Of America Corporation Systems and methods for dynamic management of stored cache data based on predictive usage information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366666B2 (en) * 2003-10-01 2008-04-29 International Business Machines Corporation Relative delta computations for determining the meaning of language inputs
US8380511B2 (en) * 2007-02-20 2013-02-19 Intervoice Limited Partnership System and method for semantic categorization
US8229743B2 (en) * 2009-06-23 2012-07-24 Autonomy Corporation Ltd. Speech recognition system

Patent Citations (151)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5418717A (en) * 1990-08-27 1995-05-23 Su; Keh-Yih Multiple score language processing system
US5077804A (en) * 1990-12-11 1991-12-31 Richard Daniel D Telecommunications device and related method
US5457768A (en) * 1991-08-13 1995-10-10 Kabushiki Kaisha Toshiba Speech recognition apparatus using syntactic and semantic analysis
US5278980A (en) * 1991-08-16 1994-01-11 Xerox Corporation Iterative technique for phrase query formation and an information retrieval system employing same
US5671329A (en) * 1993-03-09 1997-09-23 Nec Corporation Speech dialogue system in which a recognition and understanding process, application process, and voice input response are performed simultaneously with voice input
US5625748A (en) * 1994-04-18 1997-04-29 Bbn Corporation Topic discriminator using posterior probability or confidence scores
US5597312A (en) * 1994-05-04 1997-01-28 U S West Technologies, Inc. Intelligent tutoring method and system
US5937384A (en) * 1996-05-01 1999-08-10 Microsoft Corporation Method and system for speech recognition using continuous density hidden Markov models
US5966686A (en) * 1996-06-28 1999-10-12 Microsoft Corporation Method and system for computing semantic logical forms from syntax trees
US5867817A (en) * 1996-08-19 1999-02-02 Virtual Vision, Inc. Speech recognition manager
US6185531B1 (en) * 1997-01-09 2001-02-06 Gte Internetworking Incorporated Topic indexing method
US6961954B1 (en) * 1997-10-27 2005-11-01 The Mitre Corporation Automated segmentation, information extraction, summarization, and presentation of broadcast news
US6178398B1 (en) * 1997-11-18 2001-01-23 Motorola, Inc. Method, device and system for noise-tolerant language understanding
US20020059066A1 (en) * 1998-04-08 2002-05-16 O'hagan Timothy P. Speech recognition system and method for employing the same
US6236968B1 (en) * 1998-05-14 2001-05-22 International Business Machines Corporation Sleep prevention dialog based car system
US6937975B1 (en) * 1998-10-08 2005-08-30 Canon Kabushiki Kaisha Apparatus and method for processing natural language
US20020002454A1 (en) * 1998-12-07 2002-01-03 Srinivas Bangalore Automatic clustering of tokens from a corpus for grammar acquisition
US6243669B1 (en) * 1999-01-29 2001-06-05 Sony Corporation Method and apparatus for providing syntactic analysis and data structure for translation knowledge in example-based language translation
US6442524B1 (en) * 1999-01-29 2002-08-27 Sony Corporation Analyzing inflectional morphology in a spoken language translation system
US6223150B1 (en) * 1999-01-29 2001-04-24 Sony Corporation Method and apparatus for parsing in a spoken language translation system
US6519562B1 (en) * 1999-02-25 2003-02-11 Speechworks International, Inc. Dynamic semantic control of a speech recognition system
US20020128821A1 (en) * 1999-05-28 2002-09-12 Farzad Ehsani Phrase-based dialogue modeling with particular application to creating recognition grammars for voice-controlled user interfaces
US6816831B1 (en) * 1999-10-28 2004-11-09 Sony Corporation Language learning apparatus and method therefor
US7912702B2 (en) * 1999-11-12 2011-03-22 Phoenix Solutions, Inc. Statistical language model trained with semantic variants
US7006973B1 (en) * 2000-01-31 2006-02-28 Intel Corporation Providing information in response to spoken requests
US20070174057A1 (en) * 2000-01-31 2007-07-26 Genly Christopher H Providing programming information in response to spoken requests
US20010047262A1 (en) * 2000-02-04 2001-11-29 Alexander Kurganov Robust voice browser system and voice activated device controller
US20020029304A1 (en) * 2000-06-06 2002-03-07 Microsoft Corporation Method and system for defining semantic categories and actions
US20030163321A1 (en) * 2000-06-16 2003-08-28 Mault James R Speech recognition capability for a personal digital assistant
US20060106596A1 (en) * 2000-07-20 2006-05-18 Microsoft Corporation Ranking Parser for a Natural Language Processing System
US6766320B1 (en) * 2000-08-24 2004-07-20 Microsoft Corporation Search engine with natural language-based robust parsing for user query and relevance feedback learning
US20020032561A1 (en) * 2000-09-11 2002-03-14 Nec Corporation Automatic interpreting system, automatic interpreting method, and program for automatic interpreting
US7085708B2 (en) * 2000-09-23 2006-08-01 Ravenflow, Inc. Computer system with natural language to machine language translator
US20040078190A1 (en) * 2000-09-29 2004-04-22 Fass Daniel C Method and system for describing and identifying concepts in natural language text for information retrieval and processing
US6697793B2 (en) * 2001-03-02 2004-02-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration System, method and apparatus for generating phrases from a database
US20020169596A1 (en) * 2001-05-04 2002-11-14 Brill Eric D. Method and apparatus for unsupervised training of natural language processing units
US20030033333A1 (en) * 2001-05-11 2003-02-13 Fujitsu Limited Hot topic extraction apparatus and method, storage medium therefor
US7089226B1 (en) * 2001-06-28 2006-08-08 Microsoft Corporation System, representation, and method providing multilevel information retrieval with clarification dialog
US20030233224A1 (en) * 2001-08-14 2003-12-18 Insightful Corporation Method and system for enhanced data searching
US20030105638A1 (en) * 2001-11-27 2003-06-05 Taira Rick K. Method and system for creating computer-understandable structured medical data from natural language reports
US20030108334A1 (en) * 2001-12-06 2003-06-12 Koninklijke Philips Electronics N.V. Adaptive environment system and method of providing an adaptive environment
US20050080629A1 (en) * 2002-01-18 2005-04-14 David Attwater Multi-mode interactive dialogue apparatus and method
US20040059577A1 (en) * 2002-06-28 2004-03-25 International Business Machines Corporation Method and apparatus for preparing a document to be read by a text-to-speech reader
US20040158558A1 (en) * 2002-11-26 2004-08-12 Atsuko Koizumi Information processor and program for implementing information processor
US7177817B1 (en) * 2002-12-12 2007-02-13 Tuvox Incorporated Automatic generation of voice content for a voice response system
US20060259294A1 (en) * 2002-12-16 2006-11-16 John Tashereau Voice recognition system and method
US20060259299A1 (en) * 2003-01-15 2006-11-16 Yumiko Kato Broadcast reception method, broadcast reception system, recording medium and program (as amended)
US20040148170A1 (en) * 2003-01-23 2004-07-29 Alejandro Acero Statistical classifiers for spoken language understanding and command/control scenarios
US20050105712A1 (en) * 2003-02-11 2005-05-19 Williams David R. Machine learning
US7313523B1 (en) * 2003-05-14 2007-12-25 Apple Inc. Method and apparatus for assigning word prominence to new or previous information in speech synthesis
US20050049867A1 (en) * 2003-08-11 2005-03-03 Paul Deane Cooccurrence and constructions
US20050055209A1 (en) * 2003-09-05 2005-03-10 Epstein Mark E. Semantic language modeling and confidence measurement
US20050144000A1 (en) * 2003-12-26 2005-06-30 Kabushiki Kaisha Toshiba Contents providing apparatus and method
US7983896B2 (en) * 2004-03-05 2011-07-19 SDL Language Technology In-context exact (ICE) matching
US7739103B2 (en) * 2004-04-06 2010-06-15 Educational Testing Service Lexical association metric for knowledge-free extraction of phrasal terms
US20060056602A1 (en) * 2004-09-13 2006-03-16 Sbc Knowledge Ventures, L.P. System and method for analysis and adjustment of speech-enabled systems
US20100174716A1 (en) * 2004-09-30 2010-07-08 Google Inc. Methods and systems for improving text segmentation
US20060080098A1 (en) * 2004-09-30 2006-04-13 Nick Campbell Apparatus and method for speech processing using paralinguistic information in vector form
US20060074671A1 (en) * 2004-10-05 2006-04-06 Gary Farmaner System and methods for improving accuracy of speech recognition
US8311807B2 (en) * 2004-11-09 2012-11-13 Samsung Electronics Co., Ltd. Periodically extracting and evaluating frequency of occurrence data of unregistered terms in a document for updating a dictionary
US20080046244A1 (en) * 2004-11-30 2008-02-21 Yoshio Ohno Speech Recognition Device
US20060129397A1 (en) * 2004-12-10 2006-06-15 Microsoft Corporation System and method for identifying semantic intent from acoustic information
US20060235843A1 (en) * 2005-01-31 2006-10-19 Textdigger, Inc. Method and system for semantic search and retrieval of electronic documents
US20060173686A1 (en) * 2005-02-01 2006-08-03 Samsung Electronics Co., Ltd. Apparatus, method, and medium for generating grammar network for use in speech recognition and dialogue speech recognition
US20060173683A1 (en) * 2005-02-03 2006-08-03 Voice Signal Technologies, Inc. Methods and apparatus for automatically extending the voice vocabulary of mobile communications devices
US20060197764A1 (en) * 2005-03-02 2006-09-07 Yang George L Document animation system
US7996219B2 (en) * 2005-03-21 2011-08-09 At&T Intellectual Property Ii, L.P. Apparatus and method for model adaptation for spoken language understanding
US9002725B1 (en) * 2005-04-20 2015-04-07 Google Inc. System and method for targeting information based on message content
US7912701B1 (en) * 2005-05-04 2011-03-22 IgniteIP Capital IA Special Management LLC Method and apparatus for semiotic correlation
US20070005369A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Dialog analysis
US20080221903A1 (en) * 2005-08-31 2008-09-11 International Business Machines Corporation Hierarchical Methods and Apparatus for Extracting User Intent from Spoken Utterances
US20140143054A1 (en) * 2005-10-31 2014-05-22 Yahoo! Inc. System for identifying and selecting advertising categories
US20070118374A1 (en) * 2005-11-23 2007-05-24 Wise Gerald B Method for generating closed captions
US8321220B1 (en) * 2005-11-30 2012-11-27 At&T Intellectual Property Ii, L.P. System and method of semi-supervised learning for spoken language understanding using semantic role labeling
US20070156747A1 (en) * 2005-12-12 2007-07-05 Tegic Communications Llc Mobile Device Retrieval and Navigation
US20070136048A1 (en) * 2005-12-13 2007-06-14 David Richardson-Bunbury System for classifying words
US20080319748A1 (en) * 2006-01-31 2008-12-25 Mikio Nakano Conversation System and Conversation Software
US20070225980A1 (en) * 2006-03-24 2007-09-27 Kabushiki Kaisha Toshiba Apparatus, method and computer program product for recognizing speech
US20080141125A1 (en) * 2006-06-23 2008-06-12 Firooz Ghassabian Combined data entry systems
US8255383B2 (en) * 2006-07-14 2012-08-28 Chacha Search, Inc. Method and system for qualifying keywords in query strings
US20090077047A1 (en) * 2006-08-14 2009-03-19 Inquira, Inc. Method and apparatus for identifying and classifying query intent
US20080071536A1 (en) * 2006-09-15 2008-03-20 Honda Motor Co., Ltd. Voice recognition device, voice recognition method, and voice recognition program
US20080140389A1 (en) * 2006-12-06 2008-06-12 Honda Motor Co., Ltd. Language understanding apparatus, language understanding method, and computer program
US20080154870A1 (en) * 2006-12-26 2008-06-26 Voice Signal Technologies, Inc. Collection and use of side information in voice-mediated mobile search
US20110258204A1 (en) * 2007-01-19 2011-10-20 Wordnetworks, Inc. System for using keyword phrases on a page to provide contextually relevant content to users
US9093073B1 (en) * 2007-02-12 2015-07-28 West Corporation Automatic speech recognition tagging
US20080201136A1 (en) * 2007-02-19 2008-08-21 Kabushiki Kaisha Toshiba Apparatus and Method for Speech Recognition
US20080243820A1 (en) * 2007-03-27 2008-10-02 Walter Chang Semantic analysis of documents to rank terms
US20120310628A1 (en) * 2007-04-25 2012-12-06 Samsung Electronics Co., Ltd. Method and system for providing access to information of potential interest to a user
US20110301943A1 (en) * 2007-05-17 2011-12-08 Redstart Systems, Inc. System and method of dictation for a speech recognition command system
US8527262B2 (en) * 2007-06-22 2013-09-03 International Business Machines Corporation Systems and methods for automatic semantic role labeling of high morphological text for natural language processing applications
US20090006343A1 (en) * 2007-06-28 2009-01-01 Microsoft Corporation Machine assisted query formulation
US20090006345A1 (en) * 2007-06-28 2009-01-01 Microsoft Corporation Voice-based search processing
US20090063426A1 (en) * 2007-08-31 2009-03-05 Powerset, Inc. Identification of semantic relationships within reported speech
US9317593B2 (en) * 2007-10-05 2016-04-19 Fujitsu Limited Modeling topics using statistical distributions
US20090094233A1 (en) * 2007-10-05 2009-04-09 Fujitsu Limited Modeling Topics Using Statistical Distributions
US20120317107A1 (en) * 2007-10-11 2012-12-13 Google Inc. Methods and Systems for Classifying Search Results to Determine Page Elements
US8370352B2 (en) * 2007-10-18 2013-02-05 Siemens Medical Solutions Usa, Inc. Contextual searching of electronic records and visual rule construction
US20090150156A1 (en) * 2007-12-11 2009-06-11 Kennewick Michael R System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US20090209345A1 (en) * 2008-02-14 2009-08-20 Aruze Gaming America, Inc. Multiplayer participation type gaming system limiting dialogue voices outputted from gaming machine
US20090259650A1 (en) * 2008-04-11 2009-10-15 Ebay Inc. System and method for identification of near duplicate user-generated content
US20090276419A1 (en) * 2008-05-01 2009-11-05 Chacha Search Inc. Method and system for improvement of request processing
US20090285474A1 (en) * 2008-05-15 2009-11-19 Berteau Stefan A System and Method for Bayesian Text Classification
US20090313227A1 (en) * 2008-06-14 2009-12-17 Veoh Networks, Inc. Searching Using Patterns of Usage
US20090327260A1 (en) * 2008-06-25 2009-12-31 Microsoft Corporation Constructing a classifier for classifying queries
US20100023506A1 (en) * 2008-07-22 2010-01-28 Yahoo! Inc. Augmenting online content with additional content relevant to user interests
US20100057687A1 (en) * 2008-09-04 2010-03-04 Microsoft Corporation Predicting future queries from log data
US20100094854A1 (en) * 2008-10-14 2010-04-15 Omid Rouhani-Kalleh System for automatically categorizing queries
US20110225019A1 (en) * 2008-10-14 2011-09-15 David Taylor Search, analysis and categorization
US20100114908A1 (en) * 2008-11-04 2010-05-06 Microsoft Corporation Relevant navigation with deep links into query
US20100121840A1 (en) * 2008-11-12 2010-05-13 Yahoo! Inc. Query difficulty estimation
US20100131835A1 (en) * 2008-11-22 2010-05-27 Srihari Kumar System and methods for inferring intent of website visitors and generating and packaging visitor information for distribution as sales leads or market intelligence
US20100138402A1 (en) * 2008-12-02 2010-06-03 Chacha Search, Inc. Method and system for improving utilization of human searchers
US20100145710A1 (en) * 2008-12-08 2010-06-10 Nuance Communications, Inc. Data-Driven Voice User Interface
US20100153317A1 (en) * 2008-12-11 2010-06-17 Samsung Electronics Co., Ltd. Intelligent robot and control method thereof
US20100153106A1 (en) * 2008-12-15 2010-06-17 Verizon Data Services Llc Conversation mapping
US20100228762A1 (en) * 2009-03-05 2010-09-09 Mauge Karin System and method to provide query linguistic service
US20100268536A1 (en) * 2009-04-17 2010-10-21 David Suendermann System and method for improving performance of semantic classifiers in spoken dialog systems
US20110282913A1 (en) * 2009-04-30 2011-11-17 Oki Electric Industry Co., Ltd. Dialogue control system, method and computer readable storage medium, and multidimensional ontology processing system, method and computer readable storage medium
US20100293174A1 (en) * 2009-05-12 2010-11-18 Microsoft Corporation Query classification
US20110010367A1 (en) * 2009-06-11 2011-01-13 Chacha Search, Inc. Method and system of providing a search tool
US20110004462A1 (en) * 2009-07-01 2011-01-06 Comcast Interactive Media, Llc Generating Topic-Specific Language Models
US20110029533A1 (en) * 2009-07-28 2011-02-03 Prasantha Jayakody Method and system for tag suggestion in a tag-associated data-object storage system
US20110029311A1 (en) * 2009-07-30 2011-02-03 Sony Corporation Voice processing device and method, and program
US20110099003A1 (en) * 2009-10-28 2011-04-28 Masaaki Isozu Information processing apparatus, information processing method, and program
US20130006995A1 (en) * 2009-12-10 2013-01-03 Chesterdeal Limited Accessing stored electronic resources
US20130138641A1 (en) * 2009-12-30 2013-05-30 Google Inc. Construction of text classifiers
US20110184730A1 (en) * 2010-01-22 2011-07-28 Google Inc. Multi-dimensional disambiguation of voice commands
US20110191423A1 (en) * 2010-01-29 2011-08-04 Mcafee, Inc. Reputation management for network content classification
US20110213777A1 (en) * 2010-02-01 2011-09-01 Alibaba Group Holding Limited Method and Apparatus of Text Classification
US9330168B1 (en) * 2010-02-19 2016-05-03 Go Daddy Operating Company, LLC System and method for identifying website verticals
US20110238410A1 (en) * 2010-03-26 2011-09-29 Jean-Marie Henri Daniel Larcheveque Semantic Clustering and User Interfaces
US20110238408A1 (en) * 2010-03-26 2011-09-29 Jean-Marie Henri Daniel Larcheveque Semantic Clustering
US20120191453A1 (en) * 2010-04-26 2012-07-26 Cyberpulse L.L.C. System and methods for matching an utterance to a template hierarchy
US20110288868A1 (en) * 2010-05-19 2011-11-24 Lloyd Matthew I Disambiguation of contact information using historical data
US9177045B2 (en) * 2010-06-02 2015-11-03 Microsoft Technology Licensing, Llc Topical search engines and query context models
US20110301955A1 (en) * 2010-06-07 2011-12-08 Google Inc. Predicting and Learning Carrier Phrases for Speech Input
US20110314012A1 (en) * 2010-06-16 2011-12-22 Microsoft Corporation Determining query intent
US20110314390A1 (en) * 2010-06-18 2011-12-22 Microsoft Corporation Techniques to dynamically modify themes based on messaging
US20120016873A1 (en) * 2010-07-16 2012-01-19 Michael Mathieson Method and system for ranking search results based on categories
US20140149399A1 (en) * 2010-07-22 2014-05-29 Google Inc. Determining user intent from query patterns
US8521526B1 (en) * 2010-07-28 2013-08-27 Google Inc. Disambiguation of a spoken query term
US20120053927A1 (en) * 2010-09-01 2012-03-01 Microsoft Corporation Identifying topically-related phrases in a browsing sequence
US20120078919A1 (en) * 2010-09-29 2012-03-29 Fujitsu Limited Comparison of character strings
US20120084291A1 (en) * 2010-09-30 2012-04-05 Microsoft Corporation Applying search queries to content sets
US20120096033A1 (en) * 2010-10-14 2012-04-19 Microsoft Corporation Disambiguation of Entities
US20130212475A1 (en) * 2010-11-01 2013-08-15 Koninklijke Philips Electronics N.V. Suggesting relevant terms during text entry
US20120158703A1 (en) * 2010-12-16 2012-06-21 Microsoft Corporation Search lexicon expansion
US20120158693A1 (en) * 2010-12-17 2012-06-21 Yahoo! Inc. Method and system for generating web pages for topics unassociated with a dominant url
US8560321B1 (en) * 2011-01-05 2013-10-15 Interactions Corporation Automated speech recognition system for natural language understanding
US20120290293A1 (en) * 2011-05-13 2012-11-15 Microsoft Corporation Exploiting Query Click Logs for Domain Detection in Spoken Language Understanding

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9620147B2 (en) 2012-06-19 2017-04-11 International Business Machines Corporation Intent discovery in audio or text-based conversation
US8983840B2 (en) * 2012-06-19 2015-03-17 International Business Machines Corporation Intent discovery in audio or text-based conversation
US20180203833A1 (en) * 2016-11-04 2018-07-19 Semantic Machines, Inc. Data collection for a new conversational dialogue system
US10824798B2 (en) * 2016-11-04 2020-11-03 Semantic Machines, Inc. Data collection for a new conversational dialogue system
US10713288B2 (en) 2017-02-08 2020-07-14 Semantic Machines, Inc. Natural language content generator
US10762892B2 (en) 2017-02-23 2020-09-01 Semantic Machines, Inc. Rapid deployment of dialogue system
US10586530B2 (en) 2017-02-23 2020-03-10 Semantic Machines, Inc. Expandable dialogue system
US11069340B2 (en) 2017-02-23 2021-07-20 Microsoft Technology Licensing, Llc Flexible and expandable dialogue system
US11132499B2 (en) 2017-08-28 2021-09-28 Microsoft Technology Licensing, Llc Robust expandable dialogue system
US11182565B2 (en) * 2018-02-23 2021-11-23 Samsung Electronics Co., Ltd. Method to learn personalized intents
US11314940B2 (en) 2018-05-22 2022-04-26 Samsung Electronics Co., Ltd. Cross domain personalized vocabulary learning in intelligent assistants
US20200142719A1 (en) * 2018-11-02 2020-05-07 International Business Machines Corporation Automatic generation of chatbot meta communication
CN113657120A (en) * 2021-08-23 2021-11-16 深圳科卫机器人科技有限公司 Human-computer interaction intention analysis method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
US8688453B1 (en) 2014-04-01

Similar Documents

Publication Publication Date Title
US8688453B1 (en) Intent mining via analysis of utterances
US8812299B1 (en) Class-based language model and use
US8606581B1 (en) Multi-pass speech recognition
US8914277B1 (en) Speech and language translation of an utterance
US11093707B2 (en) Adversarial training data augmentation data for text classifiers
US9311913B2 (en) Accuracy of text-to-speech synthesis
CN112115706B (en) Text processing method and device, electronic equipment and medium
CN106406806B (en) Control method and device for intelligent equipment
US10740380B2 (en) Incremental discovery of salient topics during customer interaction
US20200184955A1 (en) Image-based approaches to identifying the source of audio data
US11189269B2 (en) Adversarial training data augmentation for generating related responses
CN108428446A (en) Audio recognition method and device
CN111428010B (en) Man-machine intelligent question-answering method and device
US10783879B2 (en) System and method for rule based modifications to variable slots based on context
CN111145733B (en) Speech recognition method, speech recognition device, computer equipment and computer readable storage medium
EP2801092A1 (en) Methods, apparatuses and computer program products for implementing automatic speech recognition and sentiment detection on a device
CN108986790A (en) Method and apparatus for voice recognition of contacts
CN112530408A (en) Method, apparatus, electronic device, and medium for recognizing speech
US9984687B2 (en) Image display device, method for driving the same, and computer readable recording medium
US10607601B2 (en) Speech recognition by selecting and refining hot words
CN111949255A (en) Script compiling method, device, equipment and storage medium based on voice
US11625630B2 (en) Identifying intent in dialog data through variant assessment
KR20170010978A (en) Method and apparatus for preventing voice phishing using pattern analysis of communication content
KR101801250B1 (en) Method and system for automatically tagging themes suited for songs
CN112447173A (en) Voice interaction method and device and computer storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JOSHI, SACHINDRA;GODBOLE, SHANTANU;REEL/FRAME:032258/0913

Effective date: 20110211

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION