US20090287626A1 - Multi-modal query generation - Google Patents

Multi-modal query generation

Info

Publication number
US20090287626A1
Authority
US
United States
Prior art keywords
query
wildcard
search
input
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/200,648
Inventor
Timothy Seung Yoon Paek
Bo Thiesson
Yun-Cheng Ju
Bongshin Lee
Christopher A. Meek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US12/200,648
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JU, YUN-CHENG, LEE, BONGSHIN, MEEK, CHRISTOPHER A., PAEK, TIMOTHY SEUNG YOON, THIESSON, BO
Publication of US20090287626A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3322 Query formulation using system suggestions
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems

Definitions

  • the Internet continues to make available ever-increasing amounts of information which can be stored in databases and accessed therefrom.
  • mobile and portable terminals e.g., cellular telephones, personal data assistants (PDAs), smartphones and other devices
  • users are becoming more mobile, and hence, more reliant upon information accessible via the Internet. Accordingly, users often search network sources such as the Internet from their mobile device.
  • search query is constructed that can be submitted to a search engine.
  • search engine matches this search query to actual search results.
  • search queries were constructed merely of keywords that were matched to a list of results based upon factors such as relevance, popularity, preference, etc.
  • the Internet and the World Wide Web continue to evolve rapidly with respect to both volume of information and number of users. As a whole, the Web provides a global space for accumulation, exchange and dissemination of information. As mobile devices become more and more commonplace to access the Web, the number of users continues to increase.
  • a user knows the name of a site, server or URL (uniform resource locator) to the site or server that is desired for access.
  • the user can access the site, by simply typing the URL in an address bar of a browser to connect to the site.
  • the user does not know the URL and therefore has to ‘search’ the Web for relevant sources and/or URL's.
  • search engines are regularly employed.
  • a search engine to facilitate locating and accessing sites based upon alphanumeric keywords and/or Boolean operators.
  • these keywords are text- or speech-based queries, although, speech is not always reliable.
  • a search engine is a tool that facilitates web navigation based upon textual (or speech-to-text) entry of a search query usually comprising one or more keywords.
  • the search engine retrieves a list of websites, typically ranked based upon relevance to the query. To enable this functionality, the search engine must generate and maintain a supporting infrastructure.
  • Upon textual entry of one or more keywords as a search query, the search engine retrieves indexed information that matches the query from an indexed database, generates a snippet of text associated with each of the matching sites and displays the results to the user. The user can thereafter scroll through a plurality of returned sites to attempt to determine if the sites are related to the interests of the user.
  • this can be an extremely time-consuming and frustrating process as search engines can return a substantial number of sites. More often than not, the user is forced to narrow the search iteratively by altering and/or adding keywords and Boolean operators to obtain the identity of websites including relevant information, again by typing (or speaking) the revised query.
  • search engines typically analyze content of alphanumeric search queries in order to return results.
  • search engines merely parse alphanumeric queries into ‘keywords’ and subsequently perform searches based upon a defined number of instances of each of the keywords in a reference.
  • the innovation disclosed and claimed herein in one aspect thereof, comprises a search system and corresponding methodologies that can couple speech, text and touch for search interfaces and engines.
  • the innovation can combine speech, text, and touch to enhance usability and efficiency of search mechanisms. Accordingly, it can be possible to locate more meaningful and comprehensive results as a function of a search query.
  • a multi-modal search management system employs a query administration component to analyze multi-modal input (e.g., text, speech, touch) and to generate appropriate search criteria. Accordingly, comprehensive and meaningful search results can be gathered.
  • the features of the innovation can be incorporated into a search engine or, alternatively, can work in conjunction with a search engine.
  • the innovation can be incorporated or retrofitted into existing search engines and/or interfaces.
  • Yet other aspects employ the features, functionalities and benefits of the innovation in mobile search applications, which has strategic importance given the increasing usage of mobile devices as a primary computing device.
  • mobile devices are not always configured or equipped with full-function keyboards, thus, the multi-modal functionality of the innovation can be employed to greatly enhance comprehensiveness of search.
  • machine learning and reasoning employs a probabilistic and/or statistical-based analysis to prognose or infer an action that a user desires to be automatically performed.
  • FIG. 1 illustrates an example block diagram of a system that establishes a query from a multi-modal input in accordance with aspects of the innovation.
  • FIG. 2 illustrates an example user interface in accordance with an aspect of the innovation.
  • FIG. 3 illustrates an example of a typical speech recognition system in accordance with an aspect of the innovation.
  • FIG. 4 illustrates an alternative example block diagram of a speech recognition system in accordance with an aspect of the innovation.
  • FIG. 5 illustrates an example flow chart of procedures that facilitate generating a query from a multi-modal input in accordance with an aspect of the innovation.
  • FIG. 6 illustrates an example flow chart of procedures that facilitate analyzing a multi-modal input in accordance with an aspect of the innovation.
  • FIG. 7 illustrates an example block diagram of a query administration component in accordance with an aspect of the innovation.
  • FIG. 8 illustrates an example analysis component in accordance with an aspect of the innovation.
  • FIG. 9 illustrates a block diagram of a computer operable to execute the disclosed architecture.
  • FIG. 10 illustrates a schematic block diagram of an exemplary computing environment in accordance with the subject innovation.
  • a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a server and the server can be a component.
  • One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
  • the terms “infer” and “inference” refer generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic, that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • While certain ways of displaying information to users are shown and described with respect to certain figures as screenshots, those skilled in the relevant art will recognize that various other alternatives can be employed.
  • the terms “screen,” “web page,” and “page” are generally used interchangeably herein.
  • the pages or screens are stored and/or transmitted as display descriptions, as graphical user interfaces, or by other methods of depicting information on a screen (whether personal computer, PDA, mobile telephone, or other suitable device, for example) where the layout and information or content to be displayed on the page is stored in memory, database, or another storage facility.
  • the innovation discloses systems (and corresponding methodologies) that expand the conventional capabilities of voice-activated search to allow users to explicitly constrain the recognition results to match the query by supplementing the speech with additional criteria, for example, to provide partial knowledge in the form of text hints. In doing so, a multi-modal approach is presented which incorporates voice with text, touch, etc. This multi-modal functionality enables users to more accurately access desired information.
  • the innovation discloses a multi-modal search interface that tightly couples speech, text and touch by utilizing regular expression queries that employ ‘wildcards,’ where parts of the query can be input via different modalities. For instance, modalities such as speech, text, touch and gestures can be used at any point in the query construction process.
  • the innovation can represent uncertainty in a spoken recognized result as wildcards in a regular expression query.
  • the innovation allows users to express their own uncertainty about parts of their utterance using expressions such as “something” or “whatchamacallit” which can then be translated into or interpreted as wildcards.
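  • By way of illustration only, the following Python sketch (assumed names and word list, not code from the patent) shows how spoken uncertainty words such as “something” or “whatchamacallit” could be translated into wildcards in a query string:

      # Illustrative sketch: map spoken uncertainty words to wildcard tokens.
      UNCERTAINTY_WORDS = {"something", "whatchamacallit", "whatever", "thing"}

      def to_wildcard_query(recognized_utterance):
          tokens = recognized_utterance.lower().split()
          return " ".join("*" if t in UNCERTAINTY_WORDS else t for t in tokens)

      print(to_wildcard_query("Saks something"))      # -> saks *
      print(to_wildcard_query("Le Something Spa"))    # -> le * spa
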
  • FIG. 1 illustrates an example block diagram of a system 100 that employs a multi-modal search management system 102 to construct meaningful search results based upon a multi-modal input query.
  • multi-modal can refer to most any combination of text, voice, touch, gestures, etc. While examples described herein are directed to a specific multi-modal example that employs text, voice and touch only, it is to be understood that other examples exist that employ a subset of these identified modalities. As well, it is to be understood that other examples exist that employ disparate modalities in combination with or separate from those described herein. For instance, other examples can employ gesture input, pattern recognition, among others to establish a search query. Similarly, while the examples are directed to mobile device implementations, it is to be understood that the features, functions and benefits of the innovation can be applied to most any computing experience, platform and/or device without departing from the spirit and scope of this disclosure and claims appended hereto.
  • the multi-modal search management component 102 can include a query administration component 104 and a search engine component 106 .
  • these subcomponents enable a user to establish a query using multiple modalities and to search for data and other resources using the multi-modal query, for example, a query constructed using text, voice, touch, gestures, etc.
  • the innovation presents a multi-modal search management system 102 which can employ user interfaces (UIs) that not only can facilitate touch and text whenever speech fails, but also allows users to assist the speech recognizer via text hints.
  • the innovation in generating a search query, can also take advantage of most any partial knowledge users may have about a business listing by letting them express their uncertainty in a simple, intuitive manner.
  • leveraging multi-modal refinement resulted in a 28% relative reduction in error rate.
  • Providing text hints along with the spoken utterance resulted in even greater relative reduction, with dramatic gains in recovery for each additional character.
  • the query administration component 104 can receive multi-modal input(s), generate an appropriate query and instruct the search engine component 106 accordingly.
  • the query administration component 104 enables one modality to be supplemented with another thereby enhancing interpretation and ease of use in locating meaningful search results.
  • speech input can be supplemented with textual hints (e.g., a beginning letter of a word) to enhance recognition accuracy.
  • textual input can be supplemented with speech to enhance scope of a search query.
  • system-generated and user-prompted wildcards can be used to facilitate, improve, increase or boost search functionality.
  • the multi-modal search management system 102 can generate (or otherwise employ) a UI as illustrated in FIG. 2 .
  • the multi-modal UI tightly couples speech with touch and text in at least two directions; users can not only use touch and text to clarify, supplement or generate their queries whenever recognition of speech is not sufficiently reliable, but they can also use speech whenever text entry becomes burdensome.
  • the innovation enables leverage of this tight coupling by transforming a typical n-best list, or a list of phrase alternates from the recognizer, into a palette of words with which users can compose and refine queries, e.g., as described in the Related Application identified above.
  • the innovation can also take advantage of most any partial knowledge users may have about the words, e.g., of the business listing. For example, a user may only remember that the listing starts with an “s” and also contains the word “avenue”. Likewise, the user may only remember “Saks something,” where the word “something” is used to express uncertainty about what words follow. While the word ‘something’ is used in the aforementioned example, it is to be appreciated that most any desired word or indicator can be used without departing from the spirit/scope of the innovation and claims appended hereto.
  • the innovation represents this uncertainty as wildcards in an enhanced regular expression search of the listings, which exploits the popularity of the listings.
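  • For perspective, a minimal Python sketch of such a wildcard search over a toy listing directory is shown below; the directory, the popularity counts and the regex translation are illustrative assumptions, not the patent's backend:

      import re

      LISTINGS = {                      # listing -> popularity (e.g., call-log count)
          "saks fifth avenue": 950,
          "fifth avenue deli": 120,
          "seattle avenue flowers": 40,
      }

      def wildcard_to_regex(query):
          # Each "*" stands for one unknown word in the query.
          parts = [re.escape(tok).replace(r"\*", r"[\w']+") for tok in query.lower().split()]
          return re.compile(r"\b" + r"\s+".join(parts))

      def popular_matches(query, k=2):
          pattern = wildcard_to_regex(query)
          hits = [(pop, name) for name, pop in LISTINGS.items() if pattern.search(name)]
          return sorted(hits, reverse=True)[:k]

      print(popular_matches("s* * avenue"))   # -> [(950, 'saks fifth avenue')]
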
  • This disclosure is focused on three phases. First, a description of the system 100 architecture together with a contrast against a typical architecture of conventional voice search applications. The specification also details the backend search infrastructure deployed for fast and efficient retrieval. Second, the disclosure presents an example UI that highlights the innovation's tightly coupled multi-modal generation capabilities and support of partial knowledge with several user scenarios.
  • The ADA (automated directory assistance) examples described herein are included to provide perspective on the features, functions and benefits of the innovation and are not intended to limit the scope of the disclosure and appended claims in any manner.
  • the following ADA example references an implementation where users can request telephone or address information of residential and business listings using speech recognition via a network (e.g., Internet) equipped mobile device (e.g., smartphone, cell phone, personal digital assistant, personal media player, navigation system, pocket PC . . . ).
  • ADA is a growing industry with over 30 million U.S. callers per month.
  • Many voice search applications focus exclusively on telephony-based ADA.
  • voice search applications encourage users to “just say what you want” in order to obtain useful mobile content such as ADA.
  • when users only remember part of what they are looking for, they are forced to guess, even though what they know may be sufficient to retrieve the desired information.
  • it is proposed to expand the capabilities of voice search to allow users to explicitly express their uncertainties as part of their queries, and as such, to provide partial knowledge.
  • the disclosure highlights the enhanced user experience that uncertain expressions afford and delineates how to perform language modeling and information retrieval.
  • Voice search applications encourage users to “just say what you want” in order to obtain useful mobile content such as business listings, driving directions, movie times, etc. Because certain types of information require recognition of a large database of choices, voice search is often formulated as both a recognition and information retrieval (IR) task, where a spoken utterance is first converted into text and then used as a search query for IR. ADA exemplifies the challenges of voice search. Not only are there millions of possible listings (e.g., 18 million in the US alone), but users also do not frequently know, remember, or say the exact business names as listed in the directory. As illustrated in FIG. 2 , in some cases, users think they know but are mistaken (e.g., “Le Sol Spa” for the listing “Le Soleil Tanning and Spa”).
  • the innovation enables expansion of the capabilities of voice search to enable users to explicitly express their uncertainties as part of their queries, and as such, to allow systems to leverage most any partial knowledge contained in those queries.
  • Voice search applications with a UI as shown in FIG. 2 can offer even richer user experiences.
  • the innovation displays not only the top matches for uncertain expressions, but also the query itself for users to edit, for example, in case they wanted to refine their queries using text as set forth in the Related Application identified above.
  • FIG. 2 illustrates a screenshot of results for the spoken utterance “Le S Something Spa”, from the previous example, as well the more general expression “Le Something Spa”. Note that the system not only retrieved exact matches for the utterances as a regular expression query, but also approximate matches.
  • n-gram statistical language models are typically used to compress and generalize across listings as well as their observed user variations.
  • the training data can be modified. Given that not enough occurrences of the word “something” appeared in the training sentences for it to be accurately recognized (e.g., 88), that number was boosted artificially by creating pseudo-listings from the original data. For every listing which was not a single word (e.g., “Starbucks”), the innovation adds new listings with “*” and “i-*” replacing individual words, where i denotes the initial letter of the word being replaced.
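  • A hedged sketch of that augmentation step (function name and output format are assumptions) could look as follows; each multi-word listing yields pseudo-listings in which individual words are replaced by “*” and by “i-*”:

      def pseudo_listings(listing):
          # For every word of a multi-word listing, emit variants with the word
          # replaced by "*" and by "<initial letter>-*".
          words = listing.split()
          if len(words) < 2:
              return []
          out = []
          for i, w in enumerate(words):
              for repl in ("*", w[0] + "-*"):
                  out.append(" ".join(words[:i] + [repl] + words[i + 1:]))
          return out

      print(pseudo_listings("home depot"))
      # -> ['* depot', 'h-* depot', 'home *', 'home d-*']
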
  • an index and retrieval algorithm can be used that could quickly find likely matches for the regular expression. This is accomplished by encoding the directory listing as a k-best suffix array. Because a k-best suffix array is sorted by both lexicographic order and most any figure of merit, such as the popularity of listings in the call logs, it is a convenient data structure for finding the most likely, or in this case, the most popular matches for a substring, especially when there could be many matches. For example, for the query “H* D*”, the k-best suffix array would quickly bring up “Home Depot” as the top match.
  • a k-best suffix array which provides popular exact matches to the listings
  • an improved term frequency can be implemented—e.g., inverse document frequency (TFIDF) algorithm.
  • voice search typically utilizes approximate search techniques, such as TFIDF, because they treat the output as just a bag of words. This is advantageous when users either incorrectly remember the order of words in a listing, or add spurious words.
  • the two IR methods are flip sides of each other.
  • the strength of finding exact matches is that the innovation can leverage most any partial knowledge users may have about their queries (e.g., word order) as well as the popularity of any matches. Its weakness is that it assumes users are correct about their partial knowledge. On the other hand, this is the strength of finding approximate matches; it is indifferent to word order and other mistakes users often make.
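  • To make the bag-of-words side concrete, the following Python snippet ranks a toy listing index with a plain TF-IDF score; it is an assumed stand-in for the IR engine, not the patent's implementation, and it illustrates that word order does not affect approximate matching:

      import math
      from collections import Counter

      LISTINGS = ["home depot", "home office design", "le soleil tanning and spa"]

      def tfidf_rank(query, listings=LISTINGS):
          docs = [Counter(l.split()) for l in listings]
          n = len(docs)
          def idf(term):
              df = sum(1 for d in docs if term in d)
              return math.log((n + 1) / (df + 1)) + 1.0   # smoothed IDF
          scored = []
          for listing, doc in zip(listings, docs):
              score = sum(doc[t] * idf(t) for t in query.split())
              scored.append((score, listing))
          return sorted(scored, reverse=True)

      print(tfidf_rank("depot home"))   # "home depot" ranks first despite the reversed word order
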
  • FIG. 3 displays an example architecture for typical voice search applications.
  • an utterance can be recognized using an n-gram statistical language model (SLM) that compresses and generalizes across training sentences.
  • the training sentences comprise not only the exact listings and business categories but also alternative expressions for those listings.
  • the output of the speech recognizer is an n-best list containing phrases that may or may not match any of the training sentences. This is often acceptable if the phrases are submitted to an information retrieval (IR) engine that utilizes techniques which treat the phrases as just bags of words.
  • the IR engine retrieves matches from an index, which is typically a subset of the language model training sentences, such as the exact listings along with their categories.
  • voice search applications with a graphical UI very often display an n-best list to users for selection, at which point users can either select a result (e.g., phrase) or retry their utterance.
  • FIG. 4 illustrates an alternative example system architecture in accordance with the innovation.
  • the ‘Search Vox’ component illustrated in FIG. 4 is analogous to the multi-modal management system 102 of FIG. 1 .
  • first, high confidence results immediately go to the IR engine.
  • Second, users are shown the n-best list, though the interaction dynamics are fundamentally different from those of conventional systems.
  • in subsequent refinement (e.g., as set forth in the Related Application referenced above), users can not only select a phrase from the n-best list, but also the individual words which make up those phrases, thereby refining search results by effectively drilling into a set of search results.
  • the n-best list is essentially treated as a sort of word palette or ‘bag of words’ from which users can select out those words that the speech recognizer heard or interpreted correctly, though they may appear in a different phrase. For example, suppose a user says “home depot,” but because of background noise, the phrase does not occur in the n-best list. Suppose, however, that the phrase “home office design” appears in the list. With typical (or conventional) voice search applications, the user would have to start over.
  • the user can simply select the word “home” and invoke the backend which finds the most popular listings that contain the word.
  • the system can measure popularity by the frequency with which a business listing appears in the ADA call logs, for example, for Live Local Search.
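  • A small illustrative sketch (toy call log, assumed function names) of that popularity-driven word selection:

      from collections import Counter

      CALL_LOG = ["home depot", "home depot", "home office design",
                  "homewood suites", "home depot", "le soleil tanning and spa"]

      POPULARITY = Counter(CALL_LOG)            # listing -> number of requests

      def popular_listings_with_word(word, k=3):
          hits = [(count, listing) for listing, count in POPULARITY.items()
                  if word in listing.split()]   # whole-word containment
          return [listing for _, listing in sorted(hits, reverse=True)[:k]]

      print(popular_listings_with_word("home"))
      # -> ['home depot', 'home office design']
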
  • regular expressions can be used.
  • FIG. 4 can be deployed within the higher level components of FIG. 1 , e.g., multi-modal search management system 102 , query administration component 104 and search engine component 106 .
  • Three other sub-components of the system architecture are discussed below: the IR engine, the supplement generator, and the list filter ( FIG. 4 ).
  • the index data structure chosen to use for regular expression matching can be based upon k-best suffix arrays. Similar to traditional suffix arrays, k-best suffix arrays arrange all suffixes of the listings into an array. While traditional suffix arrays arrange the suffixes in lexicographical order only, the k-best suffix arrays of the innovation arrange the suffixes according to two alternating orders—a lexicographical ordering and an ordering based on a figure of merit, such as popularity, preference, etc. The arrangement of the array borrows from ideas seen in the construction of KD-trees.
  • the k-best suffix array is sorted by both lexicographic order and popularity, it is a convenient structure for finding the most popular matches for a substring, especially when there are many matches.
  • the k most popular matches can be found in time close to O(log N) for most practical situations, and with a worst case guarantee of O(sqrt N), where N is the number of characters in the listings.
  • a standard suffix array enables locating most all matches to a substring in O(log N) time, but does not impose any popularity ordering on the matches. To find the most popular matches, the user would have to traverse them all.
  • the standard suffix array may be sufficiently fast when searching for the k-best matches to a large substring since there will not be many matches to traverse in this case.
  • the situation is, however, completely different for a short substring such as, for example, ‘a’.
  • a user would have to traverse all dictionary entries containing an ‘a’, which is not much better than traversing all suffixes in the listings—in O(N) time.
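  • The contrast can be illustrated with a plain (non-k-best) suffix array in Python; this toy version stores whole suffix strings for brevity and shows that finding the most popular match still requires traversing every hit in the lexicographic range:

      import bisect

      LISTINGS = [("home depot", 950), ("home office design", 85), ("homewood suites", 60)]

      # Every suffix of every listing, remembering which listing it came from.
      SUFFIXES = sorted((listing[i:], idx)
                        for idx, (listing, _) in enumerate(LISTINGS)
                        for i in range(len(listing)))

      def most_popular_match(substring):
          lo = bisect.bisect_left(SUFFIXES, (substring,))
          hi = bisect.bisect_right(SUFFIXES, (substring + "\uffff",))
          best = None
          for _, idx in SUFFIXES[lo:hi]:        # traverses *all* matches
              name, pop = LISTINGS[idx]
              if best is None or pop > best[0]:
                  best = (pop, name)
          return best

      print(most_popular_match("ho"))           # -> (950, 'home depot')
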
  • it is possible to continue a search in a k-best suffix array from the position it was previously stopped.
  • a simple variation of k-best suffix matching will therefore allow look up of the k-best (most popular) matches for an arbitrary wildcard query, such as, for instance ‘f* m* ban*’.
  • the approach proceeds as the k-best suffix matching above for the largest substring without a wildcard (‘ban’).
  • the innovation now evaluates the full wildcard query against the full listing entry for the suffix and continues the search until k valid expansions to the wildcard query are found.
  • the k-best suffix array can also be used to exclude words in the same way by continuing the search until expansions without the excluded words are found.
  • the querying process is an iterative process, which gradually eliminates the wildcards in the text string. Whenever the largest substring in the wildcard query does not change between iterations, there is an opportunity to further improve the computational efficiency of the expansion algorithm. In this case, the k-best suffix matching can just be continued from the point where the previous iteration ended.
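  • As a rough sketch of that expansion loop (a popularity-sorted list stands in for the k-best suffix array, and all names are assumptions), candidates are seeded with the largest literal chunk of the wildcard query and then validated against the full query until k matches are found:

      import re

      LISTINGS = [("first mutual bank", 700), ("bank of america", 650),
                  ("franklin miller banquets", 90), ("banner health", 80)]
      LISTINGS.sort(key=lambda x: -x[1])                 # most popular first

      def wildcard_regex(query):
          parts = [re.escape(t).replace(r"\*", r"\w*") for t in query.lower().split()]
          return re.compile(r"\b" + r"\s+".join(parts))

      def k_best_wildcard(query, k=2):
          seed = max((t.strip("*") for t in query.split()), key=len)   # largest literal chunk
          full = wildcard_regex(query)
          out = []
          for name, pop in LISTINGS:                     # popularity order
              if seed in name and full.search(name):     # candidate, then full validation
                  out.append((pop, name))
                  if len(out) == k:
                      break
          return out

      print(k_best_wildcard("f* m* ban*"))
      # -> [(700, 'first mutual bank'), (90, 'franklin miller banquets')]
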
  • with an efficient k-best suffix array matching algorithm available for the RegEx engine, the engine can be deployed, for example, onto the mobile device itself, avoiding the latencies associated with sending information back and forth along a wireless data channel. Speech recognition for ADA already takes several seconds to return an n-best list, so it is desirable to keep the latencies for wildcard queries short; the innovation is capable of shortening these latencies.
  • the innovation implements an IR engine based on an improved term frequency—inverse document frequency (TFIDF) algorithm.
  • the word is sent as a query to a RegEx generator which transforms it into a wildcard query.
  • the generator can simply insert wildcards before spaces, as well as at the end of the entire query. For example, for the query “home”, the generator could produce the regular expression “home*”.
  • the RegEx or wildcard generator uses minimal edit distance (with equal edit operation costs) to align the phrases at the word level. Once words are aligned, minimal edit distance is again applied to align the characters. Whenever there is disagreement between any aligned words or characters, a wildcard is substituted in its place. For example, for an n-best list containing the phrases “home depot” and “home office design,” the RegEx generator would produce “home * de*”. After an initial query is formulated, the RegEx generator applies heuristics to clean up the regular expression (e.g., in an aspect, no word can have more than one wildcard) before it is used to retrieve k-best matches from the RegEx engine. The RegEx generator is invoked in this form whenever speech is utilized, such as for leveraging partial knowledge.
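  • The alignment idea can be sketched in Python as follows (unit-cost edit distance at the word level, then a common-prefix merge per word pair; further cleanup heuristics are omitted, and the code is an illustration rather than the patent's generator):

      def align(a, b):
          # Minimal edit-distance alignment with unit costs; gaps become None.
          n, m = len(a), len(b)
          cost = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              cost[i][0] = i
          for j in range(1, m + 1):
              cost[0][j] = j
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost[i][j] = min(cost[i - 1][j - 1] + (a[i - 1] != b[j - 1]),
                                   cost[i - 1][j] + 1, cost[i][j - 1] + 1)
          pairs, i, j = [], n, m
          while i or j:
              if i and j and cost[i][j] == cost[i - 1][j - 1] + (a[i - 1] != b[j - 1]):
                  pairs.append((a[i - 1], b[j - 1])); i, j = i - 1, j - 1
              elif i and cost[i][j] == cost[i - 1][j] + 1:
                  pairs.append((a[i - 1], None)); i -= 1
              else:
                  pairs.append((None, b[j - 1])); j -= 1
          return pairs[::-1]

      def merge_words(w1, w2):
          # Keep the agreeing character prefix, wildcard the disagreement.
          if w1 == w2:
              return w1
          k = 0
          while k < min(len(w1), len(w2)) and w1[k] == w2[k]:
              k += 1
          return w1[:k] + "*"

      def regex_from_nbest(phrase1, phrase2):
          tokens = []
          for w1, w2 in align(phrase1.split(), phrase2.split()):
              tokens.append("*" if (w1 is None or w2 is None) else merge_words(w1, w2))
          return " ".join(tokens)

      print(regex_from_nbest("home depot", "home office design"))   # -> home * de*
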
  • the innovation's interface treats a list of phrases as a word palette. Because the word palette is most useful when it is filled with words to choose from, whenever the recognizer produces a short n-best list with fewer phrases than can appear in the user interface (which for a pocket PC interface is most often 8 items as shown in FIG. 2 ), or whenever a no-speech query has been submitted (e.g., “home*” in the previous example), it is the job of the supplement generator ( FIG. 4 ) to retrieve matches from the backend for the UI.
  • the supplement generator attempts to find exact matches from the RegEx engine first since it will be obvious to users why they were retrieved. Space permitting, approximate matches are also retrieved from the IR engine. This can be accomplished in the following manner: If any exact matches have already been found, the supplement generator will use those exact matches as queries to the IR engine until enough matches have been retrieved. If there are no exact matches, the supplement generator will use whatever query was submitted to the RegEx generator as the query.
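  • The supplement-generator policy can be summarized in a short, assumed sketch; regex_engine and ir_engine below are hypothetical callables standing in for the two backends, not APIs defined by the patent:

      def supplement(query, nbest, regex_engine, ir_engine, slots=8):
          palette = list(nbest)
          exact = regex_engine(query)                     # popular exact matches first
          palette += [m for m in exact if m not in palette]
          seeds = exact or [query]                        # fall back to the raw query
          for seed in seeds:                              # then approximate matches
              if len(palette) >= slots:
                  break
              palette += [m for m in ir_engine(seed) if m not in palette]
          return palette[:slots]

      demo = supplement("home*", ["home office design"],
                        regex_engine=lambda q: ["home depot"],
                        ir_engine=lambda q: ["homewood suites", "home depot"])
      print(demo)   # -> ['home office design', 'home depot', 'homewood suites']
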
  • the List filter simply uses a wildcard query to filter out an n-best list obtained from the speech recognizer.
  • the List filter is used primarily for text hints, which are discussed infra.
  • the innovation can display an n-best list to the user, making an interface (e.g., UI of FIG. 2 ) appear, at least at first blush, like any other voice search application.
  • This aspect facilitates a default correction mechanism users may expect of speech applications; namely, that when their utterances fail to be correctly recognized, they may still select from a list of choices, provided that their utterance exists among these choices.
  • the interface endows users with a larger arsenal of recovery strategies—for example, text hints, word selection from a word palette or bag of words, etc.
  • FIG. 5 illustrates a methodology of generating a multi-modal query in accordance with an aspect of the innovation. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, e.g., in the form of a flow chart, are shown and described as a series of acts, it is to be understood and appreciated that the subject innovation is not limited by the order of acts, as some acts may, in accordance with the innovation, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the innovation.
  • multi-modal input is received, for example, by way of the UI of FIG. 2 .
  • multi-modal input can include text, speech, touch, gesture, etc. input. While examples are described herein, it is to be understood that the multi-modal input can render and employ UIs that are capable of receiving most any protocol combination. Additionally, the inputs can be received at different timings as appropriate.
  • the multi-modal input is analyzed to interpret the input. For instance, text can be parsed, speech can be converted, etc.
  • An appropriate search query can be generated at 506 .
  • a search query can be established to increase accuracy and meaningfulness of results.
  • results in accordance with the search query can be obtained at 508 .
  • a multi-modal input is received, for example, text, speech, touch, gesture, etc.
  • a determination is made as to whether the input includes text data. If so, at 606, the data can be parsed and analyzed to determine keywords, text hints and/or context of the text. Additionally, a determination can be made as to whether wildcards should be used to effect a query.
  • a search query can be generated.
  • wildcards can be used as appropriate to establish a comprehensive search query.
  • TFIDF algorithms can be applied where appropriate.
  • other logic and inferences can be made to establish user intent based upon the multi-modal input thereby establishing a comprehensive query that can be used to fetch meaningful search results.
  • the query administration component 104 can include a query generation component 702 and an analysis component 704 . Together these sub-components ( 702 , 704 ) facilitate transformation of a multi-modal input into a comprehensive search query.
  • the query generation component 702 employs input from the analysis component 704 to establish an ample and comprehensive search query that will produce results in line with intentions of the user input.
  • the innovation can evaluate the multi-modal input.
  • the analysis component 704 can be employed to effect this evaluation.
  • Logic can be employed in connection with the analysis component 704 to effect the evaluation.
  • FIG. 8 illustrates an example block diagram of an analysis component 704 .
  • the analysis component 704 can include a text evaluation component 802 , a speech evaluation component 804 and a gesture evaluation component 806 , all of which are capable of evaluating multi-modal input in efforts to establish comprehensive search queries. While specific modality evaluation components are shown in FIG. 8 ( 802 , 804 , 806 ), it is to be understood that alternative aspects can include other evaluation components without departing from the spirit and/or scope of the innovation.
  • logic component 808 can be employed to effect the evaluation and/or interpretation of the input.
  • logic component 808 can include rules-based and/or inference-based (e.g., machine learning and reasoning (MLR)) logic.
  • the innovation can employ MLR which facilitates automating one or more features in accordance with the subject specification.
  • the subject innovation (e.g., in connection with input interpretation or query generation) can employ various MLR-based schemes for carrying out various aspects thereof.
  • Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed.
  • a support vector machine is an example of a classifier that can be employed.
  • the SVM operates by finding a hypersurface in the space of possible inputs that attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data.
  • Other directed and undirected model classification approaches that can be employed include, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority.
  • the subject innovation can employ classifiers that are explicitly trained (e.g., via a generic training data) as well as implicitly trained (e.g., via observing user behavior, receiving extrinsic information).
  • SVM's are configured via a learning or training phase within a classifier constructor and feature selection module.
  • the classifier(s) can be used to automatically learn and perform a number of functions, including but not limited to determining according to a predetermined criteria how to interpret an input, how to establish a query, etc.
  • a user can select words by way of a touch screen thereby establishing a search query. Additionally, the selected words can be chosen (or otherwise identified) for inclusion or alternatively, exclusion, from a set of search results. In other words, a selection can be used as a filter to screen out results that contain a particular word or set of words. Moreover, a selection can be supplemented with speech (or other modality) thereby enhancing the searching capability of the innovation. While many of the examples described herein are directed to selection of words from an n-best list, it is to be understood that the innovation can treat most any display rendering as a bag of words thereby enabling selection to enhance comprehensive searching and query construction.
  • the innovation can support query generation via multi-modal input by combining speech with text hints. Just in the way that users can resort to touch and text when speech fails, they can also resort to speech whenever typing becomes burdensome, or when they feel they have provided enough text hints for the recognizer to identify their query.
  • the user starts typing “m” for the intended query “mill creek family practice,” but because the query is too long to type, the user utters the intended query after pressing a trigger or specific functional soft key button. After the query returns from the backend, all choices in the list start with an “m” and can indeed include the user's utterance.
  • the innovation can achieve this functionality by first converting the text hint in the textbox into a wildcard query and then using that to filter the n-best list as well as to retrieve additional matches from the RegEx engine. In principle, the innovation acknowledges that the query should be used to bias the recognition of the utterance in the speech engine itself.
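  • A minimal sketch of the hint-to-wildcard filtering (assumed names; retrieval of additional matches from the RegEx engine is left out) could look like this:

      import re

      def hint_to_wildcard(hint):
          # "m" -> "m*", "mill cr" -> "mill cr*"
          return hint.strip().lower() + "*"

      def filter_nbest(hint, nbest):
          parts = [re.escape(t).replace(r"\*", r"[\w']*")
                   for t in hint_to_wildcard(hint).split()]
          pattern = re.compile(r"\b" + r"\s+".join(parts))
          return [p for p in nbest if pattern.match(p.lower())]

      nbest = ["bill creek family practice", "mill creek family practice", "miller paint"]
      print(filter_nbest("m", nbest))
      # -> ['mill creek family practice', 'miller paint']
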
  • Referring now to FIG. 9, there is illustrated a block diagram of a computer operable to execute the disclosed architecture.
  • FIG. 9 and the following discussion are intended to provide a brief, general description of a suitable computing environment 900 in which the various aspects of the innovation can be implemented. While the innovation has been described above in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that the innovation also can be implemented in combination with other program modules and/or as a combination of hardware and software.
  • program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
  • the illustrated aspects of the innovation may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote memory storage devices.
  • Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable media can comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
  • Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
  • the exemplary environment 900 for implementing various aspects of the innovation includes a computer 902 , the computer 902 including a processing unit 904 , a system memory 906 and a system bus 908 .
  • the system bus 908 couples system components including, but not limited to, the system memory 906 to the processing unit 904 .
  • the processing unit 904 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 904 .
  • the system bus 908 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
  • the system memory 906 includes read-only memory (ROM) 910 and random access memory (RAM) 912 .
  • a basic input/output system (BIOS) is stored in a non-volatile memory 910 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 902 , such as during start-up.
  • the RAM 912 can also include a high-speed RAM such as static RAM for caching data.
  • the computer 902 further includes an internal hard disk drive (HDD) 914 (e.g., EIDE, SATA), which internal hard disk drive 914 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 916 , (e.g., to read from or write to a removable diskette 918 ) and an optical disk drive 920 , (e.g., reading a CD-ROM disk 922 or, to read from or write to other high capacity optical media such as the DVD).
  • the hard disk drive 914 , magnetic disk drive 916 and optical disk drive 920 can be connected to the system bus 908 by a hard disk drive interface 924 , a magnetic disk drive interface 926 and an optical drive interface 928 , respectively.
  • the interface 924 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the subject innovation.
  • the drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth.
  • the drives and media accommodate the storage of any data in a suitable digital format.
  • computer-readable media refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the innovation.
  • a number of program modules can be stored in the drives and RAM 912 , including an operating system 930 , one or more application programs 932 , other program modules 934 and program data 936 . All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 912 . It is appreciated that the innovation can be implemented with various commercially available operating systems or combinations of operating systems.
  • a user can enter commands and information into the computer 902 through one or more wired/wireless input devices, e.g., a keyboard 938 and a pointing device, such as a mouse 940 .
  • Other input devices may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like.
  • These and other input devices are often connected to the processing unit 904 through an input device interface 942 that is coupled to the system bus 908 , but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.
  • a monitor 944 or other type of display device is also connected to the system bus 908 via an interface, such as a video adapter 946 .
  • a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
  • the computer 902 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 948 .
  • the remote computer(s) 948 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 902 , although, for purposes of brevity, only a memory/storage device 950 is illustrated.
  • the logical connections depicted include wired/wireless connectivity to a local area network (LAN) 952 and/or larger networks, e.g., a wide area network (WAN) 954 .
  • LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
  • the computer 902 When used in a LAN networking environment, the computer 902 is connected to the local network 952 through a wired and/or wireless communication network interface or adapter 956 .
  • the adapter 956 may facilitate wired or wireless communication to the LAN 952 , which may also include a wireless access point disposed thereon for communicating with the wireless adapter 956 .
  • the computer 902 can include a modem 958 , or is connected to a communications server on the WAN 954 , or has other means for establishing communications over the WAN 954 , such as by way of the Internet.
  • the modem 958 which can be internal or external and a wired or wireless device, is connected to the system bus 908 via the serial port interface 942 .
  • program modules depicted relative to the computer 902 can be stored in the remote memory/storage device 950 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
  • the computer 902 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone.
  • the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
  • Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out; anywhere within the range of a base station.
  • Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity.
  • a Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet).
  • Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
  • the system 1000 includes one or more client(s) 1002 .
  • the client(s) 1002 can be hardware and/or software (e.g., threads, processes, computing devices).
  • the client(s) 1002 can house cookie(s) and/or associated contextual information by employing the innovation, for example.
  • the system 1000 also includes one or more server(s) 1004 .
  • the server(s) 1004 can also be hardware and/or software (e.g., threads, processes, computing devices).
  • the servers 1004 can house threads to perform transformations by employing the innovation, for example.
  • One possible communication between a client 1002 and a server 1004 can be in the form of a data packet adapted to be transmitted between two or more computer processes.
  • the data packet may include a cookie and/or associated contextual information, for example.
  • the system 1000 includes a communication framework 1006 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1002 and the server(s) 1004 .
  • Communications can be facilitated via a wired (including optical fiber) and/or wireless technology.
  • the client(s) 1002 are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., cookie(s) and/or associated contextual information).
  • the server(s) 1004 are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004 .

Abstract

A multi-modal search system (and corresponding methodology) is provided. The system employs text, speech, touch and gesture input to establish a search query. Additionally, a subset of the modalities can be used to obtain search results based upon exact or approximate matches to a search query. For example, wildcards, which can either be triggered by the user or inferred by the system, can be employed in the search.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Patent application Ser. No. 61/053,214 entitled “MULTI-MODALITY SEARCH INTERFACE” and filed May 14, 2008. This application is related to pending U.S. patent application Ser. No. _______ entitled “MULTI-MODAL QUERY REFINEMENT” filed on ______ and to pending U.S. patent application Ser. No. ______ entitled “MULTI-MODAL SEARCH WILDCARDS” filed on ______. The entireties of the above-noted applications are incorporated by reference herein.
  • BACKGROUND
  • The Internet continues to make available ever-increasing amounts of information which can be stored in databases and accessed therefrom. With the proliferation of mobile and portable terminals (e.g., cellular telephones, personal data assistants (PDAs), smartphones and other devices), users are becoming more mobile, and hence, more reliant upon information accessible via the Internet. Accordingly, users often search network sources such as the Internet from their mobile device.
  • There are essentially two phases in an Internet search. First, a search query is constructed that can be submitted to a search engine. Second, the search engine matches this search query to actual search results. Conventionally, these search queries were constructed merely of keywords that were matched to a list of results based upon factors such as relevance, popularity, preference, etc.
  • The Internet and the World Wide Web continue to evolve rapidly with respect to both volume of information and number of users. As a whole, the Web provides a global space for accumulation, exchange and dissemination of information. As mobile devices become more and more commonplace to access the Web, the number of users continues to increase.
  • In some instances, a user knows the name of a site, server or URL (uniform resource locator) to the site or server that is desired for access. In such situations, the user can access the site, by simply typing the URL in an address bar of a browser to connect to the site. Oftentimes, the user does not know the URL and therefore has to ‘search’ the Web for relevant sources and/or URL's. To maximize likelihood of locating relevant information amongst an abundance of data, Internet or web search engines are regularly employed.
  • Traditionally, to locate a site or corresponding URL of interest, the user can employ a search engine to facilitate locating and accessing sites based upon alphanumeric keywords and/or Boolean operators. In aspects, these keywords are text- or speech-based queries, although, speech is not always reliable. Essentially, a search engine is a tool that facilitates web navigation based upon textual (or speech-to-text) entry of a search query usually comprising one or more keywords. Upon receipt of a search query, the search engine retrieves a list of websites, typically ranked based upon relevance to the query. To enable this functionality, the search engine must generate and maintain a supporting infrastructure.
  • Upon textual entry of one or more keywords as a search query, the search engine retrieves indexed information that matches the query from an indexed database, generates a snippet of text associated with each of the matching sites and displays the results to the user. The user can thereafter scroll through a plurality of returned sites to attempt to determine if the sites are related to the interests of the user. However, this can be an extremely time-consuming and frustrating process as search engines can return a substantial number of sites. More often than not, the user is forced to narrow the search iteratively by altering and/or adding keywords and Boolean operators to obtain the identity of websites including relevant information, again by typing (or speaking) the revised query.
  • Conventional computer-based search, in general, is extremely text-centric (pure text or speech-to-text) in that search engines typically analyze content of alphanumeric search queries in order to return results. These traditional search engines merely parse alphanumeric queries into ‘keywords’ and subsequently perform searches based upon a defined number of instances of each of the keywords in a reference.
• Currently, users of mobile devices, such as smartphones, often attempt to access or ‘surf’ the Internet using keyboards or keypads such as a standard numeric phone keypad, a soft or miniature QWERTY keyboard, etc. Unfortunately, these input mechanisms are not always efficient for entering the text needed to search the Internet effectively. As described above, conventional mobile devices are limited to text input to establish search queries, for example, Internet search queries. Text input can be a very inefficient way to search, particularly for long periods of time and/or for very long queries.
  • SUMMARY
  • The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects of the innovation. This summary is not an extensive overview of the innovation. It is not intended to identify key/critical elements of the innovation or to delineate the scope of the innovation. Its sole purpose is to present some concepts of the innovation in a simplified form as a prelude to the more detailed description that is presented later.
  • The innovation disclosed and claimed herein, in one aspect thereof, comprises a search system and corresponding methodologies that can couple speech, text and touch for search interfaces and engines. In other words, rather than being completely dependent upon conventional textual input, the innovation can combine speech, text, and touch to enhance usability and efficiency of search mechanisms. Accordingly, it can be possible to locate more meaningful and comprehensive results as a function of a search query.
  • In aspects, a multi-modal search management system employs a query administration component to analyze multi-modal input (e.g., text, speech, touch) and to generate appropriate search criteria. Accordingly, comprehensive and meaningful search results can be gathered. The features of the innovation can be incorporated into a search engine or, alternatively, can work in conjunction with a search engine.
  • In other aspects, the innovation can be incorporated or retrofitted into existing search engines and/or interfaces. Yet other aspects employ the features, functionalities and benefits of the innovation in mobile search applications, which has strategic importance given the increasing usage of mobile devices as a primary computing device. As described above, mobile devices are not always configured or equipped with full-function keyboards, thus, the multi-modal functionality of the innovation can be employed to greatly enhance comprehensiveness of search.
  • In yet another aspect thereof, machine learning and reasoning is provided that employs a probabilistic and/or statistical-based analysis to prognose or infer an action that a user desires to be automatically performed.
  • To the accomplishment of the foregoing and related ends, certain illustrative aspects of the innovation are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the innovation can be employed and the subject innovation is intended to include all such aspects and their equivalents. Other advantages and novel features of the innovation will become apparent from the following detailed description of the innovation when considered in conjunction with the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example block diagram of a system that establishes a query from a multi-modal input in accordance with aspects of the innovation.
  • FIG. 2 illustrates an example user interface in accordance with an aspect of the innovation.
  • FIG. 3 illustrates an example of a typical speech recognition system in accordance with an aspect of the innovation.
  • FIG. 4 illustrates an alternative example block diagram of a speech recognition system in accordance with an aspect of the innovation.
  • FIG. 5 illustrates an example flow chart of procedures that facilitate generating a query from a multi-modal input in accordance with an aspect of the innovation.
  • FIG. 6 illustrates an example flow chart of procedures that facilitate analyzing a multi-modal input in accordance with an aspect of the innovation.
  • FIG. 7 illustrates an example block diagram of a query administration component in accordance with an aspect of the innovation.
  • FIG. 8 illustrates an example analysis component in accordance with an aspect of the innovation.
  • FIG. 9 illustrates a block diagram of a computer operable to execute the disclosed architecture.
  • FIG. 10 illustrates a schematic block diagram of an exemplary computing environment in accordance with the subject innovation.
  • DETAILED DESCRIPTION
  • The innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the innovation can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the innovation.
  • As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
• As used herein, the terms “infer” or “inference” refer generally to the process of reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
  • While certain ways of displaying information to users are shown and described with respect to certain figures as screenshots, those skilled in the relevant art will recognize that various other alternatives can be employed. The terms “screen,” “web page,” and “page” are generally used interchangeably herein. The pages or screens are stored and/or transmitted as display descriptions, as graphical user interfaces, or by other methods of depicting information on a screen (whether personal computer, PDA, mobile telephone, or other suitable device, for example) where the layout and information or content to be displayed on the page is stored in memory, database, or another storage facility.
  • Conventional voice-enabled search applications encourage users to “just say what you want” in order to obtain useful content such as automated directory assistance (ADA) via a mobile device. Unfortunately, when users only remember part of what they are looking for, they are forced to guess, even though what they know may be sufficient to retrieve the desired information. Additionally, oftentimes, quality of the voice recognition is impaired by background noise, speaker accents, speaker clarity, quality of recognition applications or the like.
• The innovation discloses systems (and corresponding methodologies) that expand the conventional capabilities of voice-activated search to allow users to explicitly constrain the recognition results to match the query by supplementing the speech with additional criteria, for example, to provide partial knowledge in the form of text hints. In doing so, a multi-modal approach is presented which incorporates voice with text, touch, etc. This multi-modal functionality enables users to more accurately access desired information.
  • In aspects, the innovation discloses a multi-modal search interface that tightly couples speech, text and touch by utilizing regular expression queries that employ ‘wildcards,’ where parts of the query can be input via different modalities. For instance, modalities such as speech, text, touch and gestures can be used at any point in the query construction process. In other aspects, the innovation can represent uncertainty in a spoken recognized result as wildcards in a regular expression query. In yet other aspects, the innovation allows users to express their own uncertainty about parts of their utterance using expressions such as “something” or “whatchamacallit” which can then be translated into or interpreted as wildcards.
  • Referring initially to the drawings, FIG. 1 illustrates an example block diagram of a system 100 that employs a multi-modal search management system 102 to construct meaningful search results based upon a multi-modal input query. It is to be understood that, as used herein, ‘multi-modal’ can refer to most any combination of text, voice, touch, gestures, etc. While examples described herein are directed to a specific multi-modal example that employs text, voice and touch only, it is to be understood that other examples exist that employ a subset of these identified modalities. As well, it is to be understood that other examples exist that employ disparate modalities in combination with or separate from those described herein. For instance, other examples can employ gesture input, pattern recognition, among others to establish a search query. Similarly, while the examples are directed to mobile device implementations, it is to be understood that the features, functions and benefits of the innovation can be applied to most any computing experience, platform and/or device without departing from the spirit and scope of this disclosure and claims appended hereto.
  • As shown the multi-modal search management component 102 can include a query administration component 104 and a search engine component 106. Essentially, these subcomponents (104, 106) enable a user to establish a query using multiple modalities and to search for data and other resources using the multi-modal query, for example, a query constructed using text, voice, touch, gestures, etc. Features, functions and benefits of the innovation will be described in greater detail below.
  • Internet usage, especially via mobile devices, continues to grow as users seek anytime, anywhere access to information. Because users frequently search for businesses, directory assistance has recently been the focus of conventional voice search applications utilizing speech as the primary input modality. Unfortunately, mobile scenarios often contain noise which degrades performance of speech recognition functionalities. Thus, the innovation presents a multi-modal search management system 102 which can employ user interfaces (UIs) that not only can facilitate touch and text whenever speech fails, but also allows users to assist the speech recognizer via text hints.
  • Continuing with the ADA example from above, in generating a search query, the innovation can also take advantage of most any partial knowledge users may have about a business listing by letting them express their uncertainty in a simple, intuitive manner. In simulation experiments conducted on real voice search data, leveraging multi-modal refinement resulted in a 28% relative reduction in error rate. Providing text hints along with the spoken utterance resulted in even greater relative reduction, with dramatic gains in recovery for each additional character.
• As can be appreciated, according to market research, mobile devices are believed to be poised to rival desktop and laptop PCs (personal computers) as a dominant Internet platform, providing users with anytime, anywhere access to information. One common request for information is the telephone number or address of local businesses. Because perusing a large index of business listings can be a cumbersome affair using existing mobile text and touch input mechanisms, directory assistance has emerged as a focus of voice search applications, which utilize speech as the primary input modality. Unfortunately, mobile environments pose problems for speech recognition, even for native speakers. First, mobile settings often contain non-stationary noise which cannot be easily cancelled or filtered. Second, speakers tend to adapt to surrounding noise in acoustically unhelpful ways. Under such adverse conditions, task completion for voice search is less than stellar, especially in the absence of an effective correction user interface for dealing with speech recognition errors.
  • In operation, the query administration component 104 can receive multi-modal input(s), generate an appropriate query and instruct the search engine component 106 accordingly. As will be understood upon a review of the figures and discussions that follow, the query administration component 104 enables one modality to be supplemented with another thereby enhancing interpretation and ease of use in locating meaningful search results. In one example, speech input can be supplemented with textual hints (e.g., a beginning letter of a word) to enhance recognition accuracy. Similarly, textual input can be supplemented with speech to enhance scope of a search query. Still further, system generated and user prompted wildcards can be used to facilitate, improve, increase or boost functionality.
  • In view of the challenges of conventional voice search approaches, especially mobile voice search, the multi-modal search management system 102 can generate (or otherwise employ) a UI as illustrated in FIG. 2. The multi-modal UI tightly couples speech with touch and text in at least two directions; users can not only use touch and text to clarify, supplement or generate their queries whenever recognition of speech is not sufficiently reliable, but they can also use speech whenever text entry becomes burdensome. Additionally, the innovation enables leverage of this tight coupling by transforming a typical n-best list, or a list of phrase alternates from the recognizer, into a palette of words with which users can compose and refine queries, e.g., as described in the Related Application identified above.
  • The innovation can also take advantage of most any partial knowledge users may have about the words, e.g., of the business listing. For example, a user may only remember that the listing starts with an “s” and also contains the word “avenue”. Likewise, the user may only remember “Saks something,” where the word “something” is used to express uncertainty about what words follow. While the word ‘something’ is used in the aforementioned example, it is to be appreciated that most any desired word or indicator can be used without departing from the spirit/scope of the innovation and claims appended hereto. The innovation represents this uncertainty as wildcards in an enhanced regular expression search of the listings, which exploits the popularity of the listings.
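• By way of a non-limiting illustration, the translation of such uncertainty expressions into a wildcard query could be sketched roughly as follows in Python. The filler vocabulary, the helper name, and the handling of single-letter hints are illustrative assumptions rather than the claimed implementation:

```python
import re

# Words a user might speak or type to signal uncertainty; the vocabulary is an assumption.
UNCERTAINTY_WORDS = {"something", "whatchamacallit", "whatever"}


def utterance_to_wildcard_query(utterance):
    """Translate an utterance containing uncertainty words into a wildcard query.

    "Saks something"      -> "saks *"
    "le s something spa"  -> "le s* spa"  (a lone letter is kept as an initial-letter hint)
    """
    tokens = ["*" if t in UNCERTAINTY_WORDS else t for t in utterance.lower().split()]
    query = " ".join(tokens)
    # Fuse a single-letter hint with the wildcard that follows it: "s *" -> "s*".
    query = re.sub(r"\b([a-z]) \*", r"\1*", query)
    # Collapse consecutive wildcards into one.
    query = re.sub(r"\*( \*)+", "*", query)
    return query


if __name__ == "__main__":
    print(utterance_to_wildcard_query("Saks something"))      # saks *
    print(utterance_to_wildcard_query("le s something spa"))  # le s* spa
```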
  • This disclosure is focused on three phases. First, a description of the system 100 architecture together with a contrast against a typical architecture of conventional voice search applications. The specification also details the backend search infrastructure deployed for fast and efficient retrieval. Second, the disclosure presents an example UI that highlights the innovation's tightly coupled multi-modal generation capabilities and support of partial knowledge with several user scenarios.
  • It is to be understood that the ADA examples described herein are included to provide perspective to the features, functions and benefits of the innovation and are not intended to limit the scope of the disclosure and appended claims in any manner. The following ADA example references an implementation where users can request telephone or address information of residential and business listings using speech recognition via a network (e.g., Internet) equipped mobile device (e.g., smartphone, cell phone, personal digital assistant, personal media player, navigation system, pocket PC . . . ). As will be appreciated, with increased use of Internet-capable mobile communication devices, ADA is a growing industry with over 30 million U.S. callers per month. Many voice search applications focus exclusively on telephony-based ADA. However, more recent applications have migrated onto other mobile devices, providing users with a rich client experience which includes, among other services, maps and driving directions in addition to ADA. Whether users call ADA or use a data channel to send utterances, the speech recognition task is most always dispatched to speech servers, due to the fact that decoding utterances for large domains with many choices (e.g., high perplexity domains) requires sufficient computational power, which to date does not exist on mobile devices. However, it is to be appreciated that the features, functions and benefits of the innovation can be employed in connection with any data or electronic search including, but not limited to, Internet and intranet searching embodiments.
• Returning to the ADA example, because there are currently over 18 million listings in the U.S. Yellow Pages alone, and users frequently may not use the exact name of the listing as found in the directory (e.g., “Maggiano's Italian Restaurant” instead of “Maggiano's Little Italy”), grammar-based recognition approaches that rely on lists fail to scale properly. As such, approaches to ADA have focused on combining speech recognition with information retrieval techniques.
• As described supra, voice search applications encourage users to “just say what you want” in order to obtain useful mobile content such as ADA. Unfortunately, when users only remember part of what they are looking for, they are forced to guess, even though what they know may be sufficient to retrieve the desired information. In this disclosure, it is proposed to expand the capabilities of voice search to allow users to explicitly express their uncertainties as part of their queries, and as such, to provide partial knowledge. Applied to ADA, the disclosure highlights the enhanced user experience that uncertain expressions afford and delineates how to perform language modeling and information retrieval.
  • Voice search applications encourage users to “just say what you want” in order to obtain useful mobile content such as business listings, driving directions, movie times, etc. Because certain types of information require recognition of a large database of choices, voice search is often formulated as both a recognition and information retrieval (IR) task, where a spoken utterance is first converted into text and then used as a search query for IR. ADA exemplifies the challenges of voice search. Not only are there millions of possible listings (e.g., 18 million in the US alone), but users also do not frequently know, remember, or say the exact business names as listed in the directory. As illustrated in FIG. 2, in some cases, users think they know but are mistaken (e.g., “Le Sol Spa” for the listing “Le Soleil Tanning and Spa”). In other cases, they remember only part of the name with certainty (e.g., listing starts with “Le” and contains the word “Spa”). In these cases, what they remember may actually be sufficient to find the listing. Unfortunately, in current voice search applications, users are forced to guess and whatever partial knowledge they could have provided is lost.
  • In this specification, the innovation enables expansion of the capabilities of voice search to enable users to explicitly express their uncertainties as part of their queries, and as such, to allow systems to leverage most any partial knowledge contained in those queries.
• Voice search applications with a UI as shown in FIG. 2 can offer even richer user experiences. In accordance with the example multi-modal interface, the innovation displays not only the top matches for uncertain expressions, but also the query itself for users to edit, for example, in case they want to refine their queries using text as set forth in the Related Application identified above. FIG. 2 illustrates a screenshot of results for the spoken utterance “Le S Something Spa”, from the previous example, as well as the more general expression “Le Something Spa”. Note that the system not only retrieved exact matches for the utterances as a regular expression query, but also approximate matches.
• As discussed earlier, the innovation's approach to voice search involves recognition plus IR. For ADA recognition, n-gram statistical language models are typically used to compress and generalize across listings as well as their observed user variations. In order to support n-gram recognition of uncertain expressions, the training data can be modified. Given that not enough occurrences of the word “something” appeared in the training sentences for it to be accurately recognized (e.g., 88 occurrences), that number was boosted artificially by creating pseudo-listings from the original data. For every listing which was not a single word (e.g., “Starbucks”), the innovation adds new listings with “*” and “i-*” replacing individual words, where i denotes the initial letter of the word being replaced. For listings with more than two words, because people tend to remember either the first or last word of a listing, the innovation can focus on replacing interior words. Furthermore, to preserve counts for priors, 4 new listings (and 4 duplicates for single word listings) were added. For example, for the listing “Le Soleil Tanning and Spa”, “Le *”, “Le S*”, “* Spa”, and “T* Spa” were generated. Although this approach of adding new listings with words replaced by “*” and “i-*” is a heuristic, it was found that it facilitated adequate bigram coverage. Finally, the pronunciation dictionary was modified so that “*” could be recognized as “something”.
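• The pseudo-listing construction described above might be sketched roughly as follows; the stop-word handling is an assumption inferred from the “T* Spa” example, and the function name is illustrative only:

```python
STOP_WORDS = {"and", "of", "the"}  # assumed: the "T* Spa" example suggests function words are skipped


def pseudo_listings(listing):
    """Generate four wildcard training listings from one original listing.

    "Le Soleil Tanning and Spa" -> ["Le *", "Le S*", "* Spa", "T* Spa"]
    Single-word listings are simply duplicated four times to preserve prior counts.
    """
    words = listing.split()
    if len(words) == 1:
        return [listing] * 4

    first, last = words[0], words[-1]
    # Initial letter of the first content word after the first word ...
    nxt = next((w for w in words[1:] if w.lower() not in STOP_WORDS), words[1])
    # ... and of the last content word before the last word.
    prev = next((w for w in reversed(words[:-1]) if w.lower() not in STOP_WORDS), words[-2])

    return [f"{first} *", f"{first} {nxt[0]}*", f"* {last}", f"{prev[0]}* {last}"]


if __name__ == "__main__":
    print(pseudo_listings("Le Soleil Tanning and Spa"))
    # ['Le *', 'Le S*', '* Spa', 'T* Spa']
```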
• The advantage of this approach is at least two-fold. First, because the innovation replaces words with “*” and “i-*” instead of the word “something”, it avoids conflicts with businesses that had “something” as part of their name (only 9 in the Seattle area). Second, by having the recognition produce wildcards, it is possible to treat the recognized result directly as a regular expression for search.
• Turning to a discussion of information retrieval, after obtaining a regular expression from the recognizer (e.g., “Le * Spa”), an index and retrieval algorithm can be used that quickly finds likely matches for the regular expression. This is accomplished by encoding the directory listings as a k-best suffix array. Because a k-best suffix array is sorted by both lexicographic order and most any figure of merit, such as the popularity of listings in the call logs, it is a convenient data structure for finding the most likely, or in this case, the most popular matches for a substring, especially when there could be many matches. For example, for the query “H* D*”, the k-best suffix array would quickly bring up “Home Depot” as the top match. Furthermore, because lookup time for finding the k most popular matches is close to O(log N) for most practical situations with a worst case guarantee of O(sqrt N), where N is the number of characters in the listings, user experience did not suffer from any additional retrieval latencies. Note that before any regular expression was submitted as a search query, a few simple heuristics were applied to clean it up (e.g., consecutive wildcards were collapsed into a single wildcard).
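• For illustration, the cleanup step applied to a recognized regular expression before submission might resemble the following minimal sketch; only the wildcard-collapsing heuristic mentioned above is shown, and any further heuristics are omitted:

```python
import re


def clean_wildcard_query(query):
    """Apply simple cleanup heuristics before a wildcard query is submitted.

    Whitespace is normalized and runs of wildcards ("**" or "* * *") are
    collapsed into a single "*".
    """
    q = " ".join(query.split())
    q = re.sub(r"\*+", "*", q)       # "**"    -> "*"
    q = re.sub(r"\*( \*)+", "*", q)  # "* * *" -> "*"
    return q


print(clean_wildcard_query("Le  * *  Spa"))  # "Le * Spa"
```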
• Besides regular expression queries using a k-best suffix array, which provide popular exact matches to the listings, it is also useful to obtain approximate matches. For this purpose, an improved term frequency-inverse document frequency (TFIDF) algorithm can be implemented. Because statistical language models can produce garbled output, voice search typically utilizes approximate search techniques, such as TFIDF, because they treat the output as just a bag of words. This is advantageous when users either incorrectly remember the order of words in a listing, or add spurious words. In some ways, the two IR methods are flip sides of each other. The strength of finding exact matches is that the innovation can leverage most any partial knowledge users may have about their queries (e.g., word order) as well as the popularity of any matches. Its weakness is that it assumes users are correct about their partial knowledge. On the other hand, this is the strength of finding approximate matches; it is indifferent to word order and other mistakes users often make.
  • FIG. 3 displays an example architecture for typical voice search applications. As illustrated, first, an utterance can be recognized using an n-gram statistical language model (SLM) that compresses and generalizes across training sentences. In the case of ADA, the training sentences comprise not only the exact listings and business categories but also alternative expressions for those listings. Because an n-gram is based on word collocation probabilities, the output of the speech recognizer is an n-best list containing phrases that may or may not match any of the training sentences. This is often acceptable if the phrases are submitted to an information retrieval (IR) engine that utilizes techniques which treat the phrases as just bags of words.
  • The IR engine (or search engine) retrieves matches from an index, which is typically a subset of the language model training sentences, such as the exact listings along with their categories. In the example architecture, if an utterance is recognized with high confidence, it is immediately sent to the IR engine to retrieve the best matching listing. However, if an utterance is ambiguous in any way, as indicated for example by medium to low confidence scores, voice search applications with a graphical UI very often display an n-best list to users for selection, at which point users can either select a result (e.g., phrase) or retry their utterance.
  • In contrast to the voice search architecture of FIG. 3, FIG. 4 illustrates an alternative example system architecture in accordance with the innovation. It is to be understood that the ‘Search Vox’ component illustrated in FIG. 4 is analogous to the multi-modal management system 102 of FIG. 1. As shown in FIG. 4, first, high confidence results immediately go to the IR engine. Second, users are shown the n-best list, though the interaction dynamics are fundamentally different than that of conventional systems. In accordance with the innovation, if subsequent refinement is desired, e.g., as set forth in the Related Application referenced above, users can not only select a phrase from the n-best list, but also the individual words which make up those phrases thereby refining search results by way of effectively drilling into a set of search results.
  • The n-best list is essentially treated as a sort of word palette or ‘bag of words’ from which users can select out those words that the speech recognizer heard or interpreted correctly, though they may appear in a different phrase. For example, suppose a user says “home depot,” but because of background noise, the phrase does not occur in the n-best list. Suppose, however, that the phrase “home office design” appears in the list. With typical (or conventional) voice search applications, the user would have to start over.
  • In accordance with the innovation, the user can simply select the word “home” and invoke the backend which finds the most popular listings that contain the word. For instance, the system can measure popularity by the frequency with which a business listing appears in the ADA call logs, for example, for Live Local Search. In order to retrieve the most popular listings that contain a particular word or substring, regular expressions can be used.
  • Because, in aspects, much of the effectiveness of the innovation's interface rests on its ability to retrieve listings using a wildcard query—or a regular expression query containing wildcards—a discussion follows that describes implementation of a RegEx engine followed by further details about wildcard queries constructed in the RegEx generator. Essentially, in operation, the RegEx generator and RegEx engine facilitate an ability to employ wildcards in establishing search queries.
  • It will be understood that the components of FIG. 4 can be deployed within the higher level components of FIG. 1, e.g., multi-modal search management system 102, query administration component 104 and search engine component 106. Three other sub-components of the system architecture are discussed below: the IR engine, the supplement generator, and the list filter (FIG. 4).
• Turning first to a discussion of the RegEx engine, the index data structure chosen for regular expression matching can be based upon k-best suffix arrays. Similar to traditional suffix arrays, k-best suffix arrays arrange all suffixes of the listings into an array. While traditional suffix arrays arrange the suffixes in lexicographical order only, the k-best suffix arrays of the innovation arrange the suffixes according to two alternating orders—a lexicographical ordering and an ordering based on a figure of merit, such as popularity, preference, etc. The arrangement of the array borrows from ideas seen in the construction of KD-trees.
  • Because the k-best suffix array is sorted by both lexicographic order and popularity, it is a convenient structure for finding the most popular matches for a substring, especially when there are many matches. In an aspect, the k most popular matches can be found in time close to O(log N) for most practical situations, and with a worst case guarantee of O(sqrt N), where N is the number of characters in the listings. In contrast, a standard suffix array enables locating most all matches to a substring in O(log N) time, but does not impose any popularity ordering on the matches. To find the most popular matches, the user would have to traverse them all.
• Consider a simple example which explains why this subtle difference is important to the application. The standard suffix array may be sufficiently fast when searching for the k-best matches to a large substring since there will not be many matches to traverse in this case. The situation is, however, completely different for a short substring such as, for example, ‘a’. In this case, a user would have to traverse all dictionary entries containing an ‘a’, which is not much better than traversing all suffixes in the listings—in O(N) time. With a clever implementation, it is possible to continue a search in a k-best suffix array from the position at which it was previously stopped. A simple variation of k-best suffix matching will therefore allow lookup of the k-best (most popular) matches for an arbitrary wildcard query, such as, for instance ‘f* m* ban*’. The approach proceeds as the k-best suffix matching above for the largest substring without a wildcard (‘ban’). At each match, the innovation now evaluates the full wildcard query against the full listing entry for the suffix and continues the search until k valid expansions to the wildcard query are found.
  • The k-best suffix array can also be used to exclude words in the same way by continuing the search until expansions without the excluded words are found. The querying process is an iterative process, which gradually eliminates the wildcards in the text string. Whenever the largest substring in the wildcard query does not change between iterations, there is an opportunity to further improve the computational efficiency of the expansion algorithm. In this case, the k-best suffix matching can just be continued from the point where the previous iteration ended.
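• The expansion procedure can be illustrated, under simplifying assumptions, with the following sketch. A popularity-ordered linear scan stands in for the k-best suffix array (so the near O(log N) behavior is not reproduced), but the control flow mirrors the description above: test the full wildcard query against each candidate, skip listings containing excluded words, and stop as soon as k valid expansions have been found. Names and sample listings are hypothetical:

```python
import re


def wildcard_to_regex(query):
    """Compile a wildcard query such as 'f* m* ban*' into a regular expression."""
    parts = [re.escape(tok).replace(r"\*", r"\w*") for tok in query.split()]
    return re.compile(r"\b" + r"\s+".join(parts), re.IGNORECASE)


def k_best_expansions(query, listings_by_popularity, k=8, excluded=()):
    """Return up to k popular listings that are valid expansions of the query."""
    rx = wildcard_to_regex(query)
    excluded = {w.lower() for w in excluded}
    hits = []
    for listing in listings_by_popularity:        # most popular first
        words = set(listing.lower().split())
        if excluded & words:                      # honor excluded words
            continue
        if rx.search(listing):                    # evaluate the full wildcard query
            hits.append(listing)
            if len(hits) == k:                    # stop after k valid expansions
                break
    return hits


# Hypothetical listings, already ordered by call-log popularity.
listings = ["Home Depot", "Holiday Inn Downtown", "Homestead Diner"]
print(k_best_expansions("h* d*", listings, k=2))  # ['Home Depot', 'Homestead Diner']
```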
• With an efficient k-best suffix array matching algorithm available, the RegEx engine can be deployed, for example, directly onto a mobile device, thereby avoiding the latencies associated with sending information back and forth along a wireless data channel. Speech recognition for ADA already takes several seconds to return an n-best list, so it is desirable to provide short latencies for wildcard queries; the innovation is capable of shortening these latencies.
  • Turning now to a discussion of the IR engine, besides wildcard queries, which provide exact matches to the listings, it is useful to also retrieve approximate matches to the listings. For at least this purpose, the innovation implements an IR engine based on an improved term frequency—inverse document frequency (TFIDF) algorithm. What is important to note about the IR engine is that it can treat queries and listings as bags of words. This is advantageous when users either incorrectly remember the order of words in a listing, or add additional words that do not actually appear in a listing. This is not the case for the RegEx engine where order and the presence of suffixes in the query matter.
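• A bare-bones bag-of-words TFIDF retriever, shown below as a hedged sketch without the improvements alluded to above, illustrates why word order is irrelevant to this branch of retrieval; the class name and sample listings are illustrative:

```python
import math
from collections import Counter


class TfidfIndex:
    """Tiny bag-of-words TFIDF index over directory listings."""

    def __init__(self, listings):
        self.listings = listings
        self.docs = [Counter(l.lower().split()) for l in listings]
        df = Counter()
        for doc in self.docs:
            df.update(doc.keys())
        n = len(listings)
        self.idf = {w: math.log(n / df[w]) for w in df}

    def _vector(self, counts):
        return {w: tf * self.idf.get(w, 0.0) for w, tf in counts.items()}

    def search(self, query, k=3):
        qv = self._vector(Counter(query.lower().split()))
        scored = []
        for listing, doc in zip(self.listings, self.docs):
            dv = self._vector(doc)
            scored.append((sum(qv[w] * dv.get(w, 0.0) for w in qv), listing))
        scored.sort(reverse=True)
        return [listing for score, listing in scored[:k] if score > 0]


index = TfidfIndex(["Le Soleil Tanning and Spa", "Saks Fifth Avenue", "Home Depot"])
print(index.search("spa le soleil"))  # word order is ignored entirely
```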
• Referring now to the RegEx generator, returning to the example in which a user selects the word “home” for “home depot” from a word palette, once the user invokes the backend, the word is sent as a query to a RegEx generator which transforms it into a wildcard query. For single phrases, the generator can simply insert wildcards before spaces, as well as at the end of the entire query. For example, for the query “home”, the generator could produce the regular expression “home*”.
  • For a list of phrases, such as an n-best list from the recognizer, the RegEx or wildcard generator uses minimal edit distance (with equal edit operation costs) to align the phrases at the word level. Once words are aligned, minimal edit distance is again applied to align the characters. Whenever there is disagreement between any aligned words or characters, a wildcard is substituted in its place. For example, for an n-best list containing the phrases “home depot” and “home office design,” the RegEx generator would produce “home * de*”. After an initial query is formulated, the RegEx generator applies heuristics to clean up the regular expression (e.g., in an aspect, no word can have more than one wildcard) before it is used to retrieve k-best matches from the RegEx engine. The RegEx generator is invoked in this form whenever speech is utilized, such as for leveraging partial knowledge.
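• One possible rendering of this alignment step appears below. The dynamic-programming alignment uses a small prefix-based tie-breaker (an assumption, not stated above) so that, as in the example, “home depot” and “home office design” yield “home * de*”; the character-level alignment is approximated by a common-prefix check. Function names are illustrative only:

```python
import re


def _sub_cost(a, b):
    """Substitution cost: 0 for identical words; slightly less than 1 when the
    words share a character prefix (assumed tie-breaker so that, e.g.,
    'depot' pairs with 'design' rather than with a gap)."""
    if a == b:
        return 0.0
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return 0.9 if i > 0 else 1.0


def _align(a, b):
    """Minimal-edit-distance alignment of two word lists; gaps are None."""
    n, m = len(a), len(b)
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = min(cost[i - 1][j - 1] + _sub_cost(a[i - 1], b[j - 1]),
                             cost[i - 1][j] + 1,   # delete from a
                             cost[i][j - 1] + 1)   # insert from b
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + _sub_cost(a[i - 1], b[j - 1]):
            pairs.append((a[i - 1], b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            pairs.append((a[i - 1], None)); i -= 1
        else:
            pairs.append((None, b[j - 1])); j -= 1
    return list(reversed(pairs))


def nbest_to_wildcard_query(phrase_a, phrase_b):
    """Align two n-best phrases and replace every disagreement with a wildcard.

    "home depot" + "home office design" -> "home * de*"
    """
    out = []
    for wa, wb in _align(phrase_a.lower().split(), phrase_b.lower().split()):
        if wa is not None and wa == wb:
            out.append(wa)
        elif wa and wb:
            k = 0
            while k < min(len(wa), len(wb)) and wa[k] == wb[k]:
                k += 1
            out.append(wa[:k] + "*")        # shared prefix plus one wildcard
        else:
            out.append("*")                 # gap -> bare wildcard
    query = " ".join(out)
    query = re.sub(r"\*( \*)+", "*", query)  # collapse consecutive wildcards
    return query


print(nbest_to_wildcard_query("home depot", "home office design"))  # home * de*
```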
• Turning now to the supplement generator of FIG. 4, as discussed earlier, the innovation's interface treats a list of phrases as a word palette. Because the word palette is most useful when it is filled with words to choose from, whenever the recognizer produces a short n-best list with fewer phrases than can appear in the user interface (which for a pocket PC interface is most often 8 items as shown in FIG. 2), or whenever a no-speech query has been submitted (e.g., “home*” in the previous example), it is the job of the supplement generator (FIG. 4) to retrieve matches from the backend for the UI.
  • Currently, the supplement generator attempts to find exact matches from the RegEx engine first since it will be obvious to users why they were retrieved. Space permitting, approximate matches are also retrieved from the IR engine. This can be accomplished in the following manner: If any exact matches have already been found, the supplement generator will use those exact matches as queries to the IR engine until enough matches have been retrieved. If there are no exact matches, the supplement generator will use whatever query was submitted to the RegEx generator as the query.
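• In outline, and assuming hypothetical regex_engine and ir_engine objects each exposing a search(query, k) method, the supplement generator's fallback chain might be sketched as:

```python
def supplement(query, regex_engine, ir_engine, slots=8):
    """Fill the word palette: exact matches first, then approximate matches.

    regex_engine.search(q, k) and ir_engine.search(q, k) are assumed to return
    popularity-ranked and TFIDF-ranked listings, respectively.
    """
    results = list(regex_engine.search(query, k=slots))
    if len(results) < slots:
        # Use exact matches as seeds for approximate retrieval; if there are
        # none, fall back to the query that was submitted to the generator.
        seeds = list(results) or [query]
        for seed in seeds:
            for match in ir_engine.search(seed, k=slots - len(results)):
                if match not in results:
                    results.append(match)
            if len(results) >= slots:
                break
    return results[:slots]
```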
  • Finally, the List filter simply uses a wildcard query to filter out an n-best list obtained from the speech recognizer. In operation, the List filter is used primarily for text hints, which are discussed infra.
  • As discussed in the previous section, the innovation can display an n-best list to the user, making an interface (e.g., UI of FIG. 2) appear, at least at first blush, like any other voice search application. This aspect facilitates a default correction mechanism users may expect of speech applications; namely, that when their utterances fail to be correctly recognized, they may still select from a list of choices, provided that their utterance exists among these choices. However, because re-speaking does not generally increase the likelihood that the utterance will be recognized correctly, and furthermore, because mobile usage poses distinct challenges not encountered in desktop settings, the interface endows users with a larger arsenal of recovery strategies—for example, text hints, word selection from a word palette or bag of words, etc.
  • FIG. 5 illustrates a methodology of generating a multi-modal query in accordance with an aspect of the innovation. While, for purposes of simplicity of explanation, the one or more methodologies shown herein, e.g., in the form of a flow chart, are shown and described as a series of acts, it is to be understood and appreciated that the subject innovation is not limited by the order of acts, as some acts may, in accordance with the innovation, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the innovation.
• At 502, a multi-modal input is received, for example, by way of the UI of FIG. 2. In operation, multi-modal input can include text, speech, touch, gesture or other input. While examples are described herein, it is to be understood that the innovation can render and employ UIs that are capable of receiving most any combination of input protocols. Additionally, the inputs can be received at different timings as appropriate.
  • At 504, the multi-modal input is analyzed to interpret the input. For instance, text can be parsed, speech can be converted, etc. An appropriate search query can be generated at 506. In other words, as a result of the analysis, a search query can be established to increase accuracy and meaningfulness of results. As shown, results in accordance with the search query can be obtained at 508.
• Referring now to FIG. 6, there is illustrated a methodology of generating a search query in accordance with the innovation. At 602, a multi-modal input is received, for example, text, speech, touch, gesture, etc. At 604, a determination is made to conclude if the input includes text data. If so, at 606, the data can be parsed and analyzed to determine keywords, text hints and/or context of the text. Additionally, a determination can be made if wildcards should be used to effect a query.
• Similarly, at 608, a determination can be made to conclude if the input includes audible data. If the input includes audible data, at 610, speech recognition (or other suitable sound analysis) mechanisms can be used to establish keywords associated with the audible data and subsequently the context of the keywords in view of the other input(s) as appropriate.
• Still further, at 612, a determination is made if the input contains gesture-related data. As with text and sound described above, if gestures were used as input, an evaluation can be effected at 614. For instance, if the gesture was intended to identify a specific number of words, this criterion can be established at 614.
  • Once the data is analyzed (e.g., 604-614), at 616, a search query can be generated. Here, wildcards can be used as appropriate to establish a comprehensive search query. Additionally, as described above, TFIDF algorithms can be applied where appropriate. Still further, other logic and inferences can be made to establish user intent based upon the multi-modal input thereby establishing a comprehensive query that can be used to fetch meaningful search results.
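• A schematic dispatcher corresponding to acts 604-616 might look like the following sketch; the recognize_speech and interpret_gesture callables, and the trivial query assembly at the end, are placeholders for the richer analysis described above:

```python
def build_query(text=None, utterance=None, gesture=None,
                recognize_speech=None, interpret_gesture=None):
    """Illustrative dispatcher: inspect each modality that is present, collect
    keywords and hints, and assemble a wildcard query. recognize_speech and
    interpret_gesture are callables assumed to be supplied by the hosting system."""
    terms = []
    if text:                                # 604-606: parse text hints
        terms.extend(text.lower().split())
    if utterance and recognize_speech:      # 608-610: speech recognition
        terms.extend(recognize_speech(utterance).lower().split())
    if gesture and interpret_gesture:       # 612-614: e.g., a gesture marking a word count
        hint = interpret_gesture(gesture)
        if hint:
            terms.append(str(hint))
    # 616: emit a wildcard query; here every term simply becomes a prefix pattern.
    return " ".join(t if t.endswith("*") else t + "*" for t in terms) or "*"
```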
  • Turning now to FIG. 7, an example block diagram of query administration component 104 is shown. Generally, the query administration component 104 can include a query generation component 702 and an analysis component 704. Together these sub-components (702, 704) facilitate transformation of a multi-modal input into a comprehensive search query.
  • The query generation component 702 employs input from the analysis component 704 to establish an ample and comprehensive search query that will produce results in line with intentions of the user input. As described in connection with the aforementioned methodologies, the innovation can evaluate the multi-modal input. In operation, the analysis component 704 can be employed to effect this evaluation. Logic can be employed in connection with the analysis component 704 to effect the evaluation.
  • FIG. 8 illustrates an example block diagram of an analysis component 704. As shown, the analysis component 704 can include a text evaluation component 802, a speech evaluation component 804 and a gesture evaluation component 806, all of which are capable of evaluating multi-modal input in efforts to establish comprehensive search queries. While specific modality evaluation components are shown in FIG. 8 (802, 804, 806), it is to be understood that alternative aspects can include other evaluation components without departing from the spirit and/or scope of the innovation.
  • As illustrated, a logic component 808 can be employed to effect the evaluation and/or interpretation of the input. In aspects, logic component 808 can include rules-based and/or inference-based (e.g., machine learning and reasoning (MLR)) logic. This logic essentially enables the multi-modal input to be interpreted or construed to align with the intent of the raw input (or portions thereof).
  • As stated above, the innovation can employ MLR which facilitates automating one or more features in accordance with the subject specification. The subject innovation (e.g., in connection with input interpretation or query generation) can employ various MLR-based schemes for carrying out various aspects thereof. For example, a process for determining an intention or interpretation based upon a speech input can be facilitated via an automatic classifier system and process.
• A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed.
• A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, which hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches that can be employed include, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence. Classification as used herein also is inclusive of statistical regression that is utilized to develop models of priority.
  • As will be readily appreciated from the subject specification, the subject innovation can employ classifiers that are explicitly trained (e.g., via a generic training data) as well as implicitly trained (e.g., via observing user behavior, receiving extrinsic information). For example, SVM's are configured via a learning or training phase within a classifier constructor and feature selection module. Thus, the classifier(s) can be used to automatically learn and perform a number of functions, including but not limited to determining according to a predetermined criteria how to interpret an input, how to establish a query, etc.
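• As a toy illustration only, an off-the-shelf SVM could be trained to map query features to an inferred action; the features (recognizer confidence, query length, wildcard count) and the labels below are invented for the example and are not drawn from the specification:

```python
# Toy illustration of f(x) = confidence(class) with an SVM classifier.
from sklearn.svm import SVC

X = [[0.92, 2, 0], [0.35, 5, 2], [0.80, 3, 1], [0.20, 6, 3]]
y = ["auto_submit", "ask_refine", "auto_submit", "ask_refine"]

clf = SVC(probability=True).fit(X, y)
print(clf.predict([[0.75, 3, 1]]))        # inferred action
print(clf.predict_proba([[0.75, 3, 1]]))  # per-class confidence
```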
  • Below, user scenarios are highlighted that demonstrate two concepts: first, tight coupling of speech with touch and text, so that whenever one of the three modalities fails or becomes burdensome, users may switch to another modality in a complementary way; second, leveraging of most any partial knowledge a user may have about the constituent words of their intended query.
• Turning to a discussion of query generation using a word palette, in accordance with the innovation, a user can select words by way of a touch screen, thereby establishing a search query. Additionally, the selected words can be chosen (or otherwise identified) for inclusion in, or alternatively, exclusion from, a set of search results. In other words, a selection can be used as a filter to screen out results that contain a particular word or set of words. Moreover, a selection can be supplemented with speech (or other modality) thereby enhancing the searching capability of the innovation. While many of the examples described herein are directed to selection of words from an n-best list, it is to be understood that the innovation can treat most any display rendering as a bag of words, thereby enabling selection to enhance comprehensive searching and query construction.
  • As stated supra, the innovation can support query generation via multi-modal input by combining speech with text hints. Just in the way that users can resort to touch and text when speech fails, they can also resort to speech whenever typing becomes burdensome, or when they feel they have provided enough text hints for the recognizer to identify their query.
• In an example, the user starts typing “m” for the intended query “mill creek family practice,” but because the query is too long to type, the user utters the intended query after pressing a trigger or specific functional soft key button. After the query returns from the backend, all choices in the list now start with an “m”, and the intended query may indeed be displayed among them.
  • The innovation can achieve this functionality by first converting the text hint in the textbox into a wildcard query and then using that to filter the n-best list as well as to retrieve additional matches from the RegEx engine. In principle, the innovation acknowledges that the query should be used to bias the recognition of the utterance in the speech engine itself.
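• A minimal sketch of this text-hint path, assuming the hint is turned into a prefix wildcard and then used by the List filter against the n-best list (the helper names and sample phrases are illustrative):

```python
import re


def text_hint_to_wildcard(hint):
    """Convert a partial text hint into a wildcard query: 'm' -> 'm*'."""
    tokens = hint.lower().split()
    if not tokens:
        return "*"
    tokens[-1] += "*"
    return " ".join(tokens)


def filter_nbest(nbest, wildcard_query):
    """List filter: keep only the n-best phrases consistent with the hint."""
    parts = [re.escape(t).replace(r"\*", r"\w*") for t in wildcard_query.split()]
    rx = re.compile("^" + r"\s+".join(parts), re.IGNORECASE)
    return [phrase for phrase in nbest if rx.match(phrase)]


nbest = ["bill creek family practice", "mill creek family practice"]  # hypothetical recognizer output
query = text_hint_to_wildcard("m")  # "m*"
print(filter_nbest(nbest, query))   # ['mill creek family practice']
```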
  • Referring now to FIG. 9, there is illustrated a block diagram of a computer operable to execute the disclosed architecture. In order to provide additional context for various aspects of the subject innovation, FIG. 9 and the following discussion are intended to provide a brief, general description of a suitable computing environment 900 in which the various aspects of the innovation can be implemented. While the innovation has been described above in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that the innovation also can be implemented in combination with other program modules and/or as a combination of hardware and software.
  • Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
  • The illustrated aspects of the innovation may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
  • A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
• Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
  • With reference again to FIG. 9, the exemplary environment 900 for implementing various aspects of the innovation includes a computer 902, the computer 902 including a processing unit 904, a system memory 906 and a system bus 908. The system bus 908 couples system components including, but not limited to, the system memory 906 to the processing unit 904. The processing unit 904 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 904.
  • The system bus 908 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 906 includes read-only memory (ROM) 910 and random access memory (RAM) 912. A basic input/output system (BIOS) is stored in a non-volatile memory 910 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 902, such as during start-up. The RAM 912 can also include a high-speed RAM such as static RAM for caching data.
• The computer 902 further includes an internal hard disk drive (HDD) 914 (e.g., EIDE, SATA), which internal hard disk drive 914 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 916 (e.g., to read from or write to a removable diskette 918) and an optical disk drive 920 (e.g., reading a CD-ROM disk 922 or reading from or writing to other high capacity optical media such as a DVD). The hard disk drive 914, magnetic disk drive 916 and optical disk drive 920 can be connected to the system bus 908 by a hard disk drive interface 924, a magnetic disk drive interface 926 and an optical drive interface 928, respectively. The interface 924 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the subject innovation.
  • The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer 902, the drives and media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the innovation.
  • A number of program modules can be stored in the drives and RAM 912, including an operating system 930, one or more application programs 932, other program modules 934 and program data 936. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 912. It is appreciated that the innovation can be implemented with various commercially available operating systems or combinations of operating systems.
  • A user can enter commands and information into the computer 902 through one or more wired/wireless input devices, e.g., a keyboard 938 and a pointing device, such as a mouse 940. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like. These and other input devices are often connected to the processing unit 904 through an input device interface 942 that is coupled to the system bus 908, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.
  • A monitor 944 or other type of display device is also connected to the system bus 908 via an interface, such as a video adapter 946. In addition to the monitor 944, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
  • The computer 902 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 948. The remote computer(s) 948 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 902, although, for purposes of brevity, only a memory/storage device 950 is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN) 952 and/or larger networks, e.g., a wide area network (WAN) 954. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
  • When used in a LAN networking environment, the computer 902 is connected to the local network 952 through a wired and/or wireless communication network interface or adapter 956. The adapter 956 may facilitate wired or wireless communication to the LAN 952, which may also include a wireless access point disposed thereon for communicating with the wireless adapter 956.
  • When used in a WAN networking environment, the computer 902 can include a modem 958, or is connected to a communications server on the WAN 954, or has other means for establishing communications over the WAN 954, such as by way of the Internet. The modem 958, which can be internal or external and a wired or wireless device, is connected to the system bus 908 via the serial port interface 942. In a networked environment, program modules depicted relative to the computer 902, or portions thereof, can be stored in the remote memory/storage device 950. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
  • The computer 902 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, newsstand, or restroom), and a telephone. This includes at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
  • Wi-Fi, or Wireless Fidelity, allows connection to the Internet from a couch at home, a bed in a hotel room, or a conference room at work, without wires. Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out, anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet). Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
  • Referring now to FIG. 10, there is illustrated a schematic block diagram of an exemplary computing environment 1000 in accordance with the subject innovation. The system 1000 includes one or more client(s) 1002. The client(s) 1002 can be hardware and/or software (e.g., threads, processes, computing devices). The client(s) 1002 can house cookie(s) and/or associated contextual information by employing the innovation, for example.
  • The system 1000 also includes one or more server(s) 1004. The server(s) 1004 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1004 can house threads to perform transformations by employing the innovation, for example. One possible communication between a client 1002 and a server 1004 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The data packet may include a cookie and/or associated contextual information, for example. The system 1000 includes a communication framework 1006 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1002 and the server(s) 1004.
  • Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1002 are operatively connected to one or more client data store(s) 1008 that can be employed to store information local to the client(s) 1002 (e.g., cookie(s) and/or associated contextual information). Similarly, the server(s) 1004 are operatively connected to one or more server data store(s) 1010 that can be employed to store information local to the servers 1004.
  • What has been described above includes examples of the innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the subject innovation, but one of ordinary skill in the art may recognize that many further combinations and permutations of the innovation are possible. Accordingly, the innovation is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.

Claims (20)

1. A system that facilitates multi-modal search, comprising:
a query administration component that converts a multi-modal input into a wildcard search query; and
a search engine component that employs the wildcard search query to retrieve a list of query suggestion results.
2. The system of claim 1, further comprising:
a query generation component that employs a plurality of modalities to generate the wildcard search query; and
an analysis component that evaluates the wildcard search query and renders the list of query suggestion results as a function of the wildcard search query.
3. The system of claim 2, wherein the plurality of modalities includes at least two of text, touch or speech.
4. The system of claim 2, wherein the query generation component facilitates generation of the wildcard search query based upon at least a portion of the list of query suggestion results.
5. The system of claim 1, wherein the list of query suggestion results includes one of an n-best list or alternates list from a speech recognizer and a list of supplementary results that includes at least one of an ‘exact’ match via a wildcard expression or an ‘approximate’ match via an information retrieval algorithm.
6. The system of claim 5, wherein a wildcard expression is generated from at least part of the n-best list obtained from a speech recognizer and used to retrieve items in an index or database which match the substrings of the wildcard search query.
7. The system of claim 5, wherein at least part of the n-best list obtained from the speech recognizer is submitted as a query to an information retrieval algorithm that is indifferent to the order of words in the wildcard search query.
8. The system of claim 7, wherein the information retrieval algorithm is a Term Frequency Inverse Document Frequency (TFIDF) algorithm.
9. The system of claim 2, wherein the query generation component employs user generated text to constrain speech recognition upon generating the wildcard search query.
10. The system of claim 2, wherein the query generation component dynamically converts a user input into a wildcard, and wherein the analysis component employs the wildcard to retrieve a subset of the suggested query results.
11. The system of claim 10, wherein a user conveys uncertainty, and wherein the wildcard search query is a regular expression query.
12. The system of claim 1, further comprising an artificial intelligence (AI) component that employs at least one of a probabilistic and a statistical-based analysis that infers an action that a user desires to be automatically performed.
13. A computer-implemented method of multi-modal search, comprising:
receiving a multi-modal input from a user;
establishing a wildcard query based upon portions of the multi-modal input; and
rendering a plurality of suggested query results based upon the wildcard query.
14. The computer-implemented method of claim 13, wherein the multi-modal input includes at least two of text, speech, touch or gesture input.
15. The computer-implemented method of claim 13, further comprising:
converting a portion of the multi-modal input into a wildcard; and
retrieving a subset of the query suggestion results based upon the wildcard.
16. The computer-implemented method of claim 13, further comprising analyzing the input as a function of an algorithm irrespective of word order.
17. The computer-implemented method of claim 16, wherein the algorithm is a TFIDF algorithm.
18. The computer-implemented method of claim 13, wherein the multi-modal input includes at least a text hint coupled with a spoken input.
19. A computer-executable system that facilitates generation of a wildcard search query based upon a multi-modal input, comprising:
means for receiving the multi-modal input from a user, wherein the multi-modal input includes at least two of text, speech, touch or gesture input;
means for analyzing the multi-modal input irrespective of order or portions of the order; and
means for generating the wildcard search query based upon the analysis.
20. The computer-executable system of claim 19, further comprising means for generating a wildcard based at least in part upon a portion of the multi-modal input, wherein the wildcard search query employs the wildcard to match zero or more characters.
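The following sketch is an editorial illustration only and is not part of the specification or claims. Assuming a small in-memory suggestion index and a speech recognizer that returns an n-best list, it shows one plausible way to (a) convert a typed text hint plus the n-best hypotheses into a regular-expression wildcard query whose '.*' matches zero or more characters (cf. claims 5, 6, 9, 10, and 20), and (b) fall back to an order-insensitive TFIDF-style comparison for 'approximate' matches (cf. claims 7, 8, 16, and 17). All function names, the sample index, and the scoring details are hypothetical.

```python
import re
from collections import Counter
from math import log, sqrt

# Hypothetical suggestion index standing in for the index/database referenced in claims 6 and 15.
SUGGESTION_INDEX = [
    "harry potter and the deathly hallows",
    "harry potter and the goblet of fire",
    "harry connick jr tour dates",
    "hairy bikers recipes",
]

def build_wildcard_query(text_hint, nbest):
    """Combine a typed text hint with speech n-best hypotheses into one
    regular-expression (wildcard) query; '.*' matches zero or more characters."""
    # The text hint constrains which hypotheses are kept (cf. claim 9).
    kept = [h for h in nbest if h.startswith(text_hint)] or nbest
    alternates = "|".join(re.escape(h) for h in kept)
    if not alternates:
        return re.compile(re.escape(text_hint) + ".*")
    return re.compile(rf"(?:{alternates}).*")

def exact_matches(pattern, index=SUGGESTION_INDEX):
    """'Exact' suggestions: index entries containing a substring matched by the wildcard query."""
    return [item for item in index if pattern.search(item)]

def approximate_matches(nbest, index=SUGGESTION_INDEX, top_k=3):
    """'Approximate' suggestions: a simple TF-IDF cosine score that ignores word order."""
    docs = [item.split() for item in index]
    df = Counter(w for doc in docs for w in set(doc))
    idf = {w: log(len(docs) / df[w]) + 1.0 for w in df}

    def vector(words):
        tf = Counter(words)
        return {w: tf[w] * idf.get(w, 1.0) for w in tf}

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    query = vector(" ".join(nbest).split())
    scored = sorted(((cosine(query, vector(doc)), item) for doc, item in zip(docs, index)),
                    reverse=True)
    return [item for score, item in scored[:top_k] if score > 0]

if __name__ == "__main__":
    nbest = ["harry potter", "hairy potter"]   # hypothetical recognizer output
    query = build_wildcard_query("ha", nbest)  # user typed "ha", then spoke the rest
    print(exact_matches(query) or approximate_matches(nbest))
```

Run with the typed hint "ha" and the hypothetical n-best list {"harry potter", "hairy potter"}, the exact (wildcard) path returns the two Harry Potter suggestions; had nothing matched, the TFIDF path would still rank suggestions that share words with the hypotheses, regardless of word order.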
US12/200,648 2008-05-14 2008-08-28 Multi-modal query generation Abandoned US20090287626A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/200,648 US20090287626A1 (en) 2008-05-14 2008-08-28 Multi-modal query generation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US5321408P 2008-05-14 2008-05-14
US12/200,648 US20090287626A1 (en) 2008-05-14 2008-08-28 Multi-modal query generation

Publications (1)

Publication Number Publication Date
US20090287626A1 true US20090287626A1 (en) 2009-11-19

Family

ID=41317081

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/200,625 Active 2030-01-29 US8090738B2 (en) 2008-05-14 2008-08-28 Multi-modal search wildcards
US12/200,584 Abandoned US20090287680A1 (en) 2008-05-14 2008-08-28 Multi-modal query refinement
US12/200,648 Abandoned US20090287626A1 (en) 2008-05-14 2008-08-28 Multi-modal query generation

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US12/200,625 Active 2030-01-29 US8090738B2 (en) 2008-05-14 2008-08-28 Multi-modal search wildcards
US12/200,584 Abandoned US20090287680A1 (en) 2008-05-14 2008-08-28 Multi-modal query refinement

Country Status (1)

Country Link
US (3) US8090738B2 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090287680A1 (en) * 2008-05-14 2009-11-19 Microsoft Corporation Multi-modal query refinement
US20110145224A1 (en) * 2009-12-15 2011-06-16 At&T Intellectual Property I, L.P. System and method for speech-based incremental search
US20110145214A1 (en) * 2009-12-16 2011-06-16 Motorola, Inc. Voice web search
US20110201387A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation Real-time typing assistance
US20120173244A1 (en) * 2011-01-04 2012-07-05 Kwak Byung-Kwan Apparatus and method for voice command recognition based on a combination of dialog models
US20120209590A1 (en) * 2011-02-16 2012-08-16 International Business Machines Corporation Translated sentence quality estimation
US8249876B1 (en) 2012-01-03 2012-08-21 Google Inc. Method for providing alternative interpretations of a voice input to a user
US20120278308A1 (en) * 2009-12-30 2012-11-01 Google Inc. Custom search query suggestion tools
US20140019462A1 (en) * 2012-07-15 2014-01-16 Microsoft Corporation Contextual query adjustments using natural action input
US20140172892A1 (en) * 2012-12-18 2014-06-19 Microsoft Corporation Queryless search based on context
US8788273B2 (en) 2012-02-15 2014-07-22 Robbie Donald EDGAR Method for quick scroll search using speech recognition
US20140258323A1 (en) * 2013-03-06 2014-09-11 Nuance Communications, Inc. Task assistant
US20140258324A1 (en) * 2013-03-06 2014-09-11 Nuance Communications, Inc. Task assistant utilizing context for improved interaction
US20150248882A1 (en) * 2012-07-09 2015-09-03 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US9129606B2 (en) 2011-09-23 2015-09-08 Microsoft Technology Licensing, Llc User query history expansion for improving language model adaptation
EP2947584A1 (en) * 2014-05-23 2015-11-25 Samsung Electronics Co., Ltd Multimodal search method and device
EP2568371A3 (en) * 2011-09-08 2016-01-06 Lg Electronics Inc. Mobile terminal and controlling method thereof
US20160050260A1 (en) * 2008-03-27 2016-02-18 Trung (Tim) Trinh Method, system and apparatus for controlling an application
US9348417B2 (en) 2010-11-01 2016-05-24 Microsoft Technology Licensing, Llc Multimodal input system
EP3001333A4 (en) * 2014-05-15 2016-08-24 Huawei Tech Co Ltd Object search method and apparatus
CN106446122A (en) * 2016-09-19 2017-02-22 华为技术有限公司 Information retrieval method and device and computation device
US9990433B2 (en) 2014-05-23 2018-06-05 Samsung Electronics Co., Ltd. Method for searching and device thereof
US10409851B2 (en) 2011-01-31 2019-09-10 Microsoft Technology Licensing, Llc Gesture-based search
US20190297189A1 (en) * 2000-02-04 2019-09-26 Parus Holdings, Inc. Personal Voice-Based Information Retrieval System
US10444979B2 (en) 2011-01-31 2019-10-15 Microsoft Technology Licensing, Llc Gesture-based search
CN111159472A (en) * 2018-11-08 2020-05-15 微软技术许可有限责任公司 Multi-modal chat techniques
US10671182B2 (en) * 2014-10-16 2020-06-02 Touchtype Limited Text prediction integration
US10795528B2 (en) 2013-03-06 2020-10-06 Nuance Communications, Inc. Task assistant having multiple visual displays
US10984337B2 (en) 2012-02-29 2021-04-20 Microsoft Technology Licensing, Llc Context-based search query formation
CN113204669A (en) * 2021-06-08 2021-08-03 武汉亿融信科科技有限公司 Short video search recommendation method and system based on voice recognition and computer storage medium
CN113656546A (en) * 2021-08-17 2021-11-16 百度在线网络技术(北京)有限公司 Multimodal search method, apparatus, device, storage medium, and program product
US11314826B2 (en) 2014-05-23 2022-04-26 Samsung Electronics Co., Ltd. Method for searching and device thereof
WO2023074916A1 (en) * 2021-10-29 2023-05-04 Tesnology Inc. Data transaction management with database on edge device

Families Citing this family (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US7818176B2 (en) 2007-02-06 2010-10-19 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US8019742B1 (en) 2007-05-31 2011-09-13 Google Inc. Identifying related queries
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US9305548B2 (en) 2008-05-27 2016-04-05 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US9183323B1 (en) 2008-06-27 2015-11-10 Google Inc. Suggesting alternative query phrases in query results
WO2010044123A1 (en) * 2008-10-14 2010-04-22 三菱電機株式会社 Search device, search index creating device, and search system
EP2211336B1 (en) * 2009-01-23 2014-10-08 Harman Becker Automotive Systems GmbH Improved speech input using navigation information
US8326637B2 (en) 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
US10176162B2 (en) * 2009-02-27 2019-01-08 Blackberry Limited System and method for improved address entry
CN102365639B (en) * 2009-04-06 2014-11-26 三菱电机株式会社 Retrieval device
CA2772638C (en) * 2009-08-31 2018-02-13 Google Inc. Framework for selecting and presenting answer boxes relevant to user input as query suggestions
US8504437B1 (en) 2009-11-04 2013-08-06 Google Inc. Dynamically selecting and presenting content relevant to user input
US8676828B1 (en) * 2009-11-04 2014-03-18 Google Inc. Selecting and presenting content relevant to user input
US11416214B2 (en) 2009-12-23 2022-08-16 Google Llc Multi-modal input on an electronic device
EP4318463A3 (en) 2009-12-23 2024-02-28 Google LLC Multi-modal input on an electronic device
US8914401B2 (en) * 2009-12-30 2014-12-16 At&T Intellectual Property I, L.P. System and method for an N-best list interface
US8849785B1 (en) * 2010-01-15 2014-09-30 Google Inc. Search query reformulation using result term occurrence count
US8650210B1 (en) 2010-02-09 2014-02-11 Google Inc. Identifying non-search actions based on a search query
WO2012024585A1 (en) 2010-08-19 2012-02-23 Othar Hansson Predictive query completion and predictive search results
US9449026B2 (en) * 2010-08-31 2016-09-20 Microsoft Technology Licensing, Llc Sketch-based image search
US8352245B1 (en) 2010-12-30 2013-01-08 Google Inc. Adjusting language models
US8473507B2 (en) * 2011-01-14 2013-06-25 Apple Inc. Tokenized search suggestions
US8296142B2 (en) 2011-01-21 2012-10-23 Google Inc. Speech recognition using dock context
US8527483B2 (en) 2011-02-04 2013-09-03 Mikko VÄÄNÄNEN Method and means for browsing by walking
US8688667B1 (en) * 2011-02-08 2014-04-01 Google Inc. Providing intent sensitive search results
US9575994B2 (en) * 2011-02-11 2017-02-21 Siemens Aktiengesellschaft Methods and devices for data retrieval
US8479110B2 (en) * 2011-03-20 2013-07-02 William J. Johnson System and method for summoning user interface objects
DE102011101146A1 (en) * 2011-05-11 2012-11-15 Abb Technology Ag Multi-level method and device for interactive retrieval of device data of an automation system
US8577913B1 (en) 2011-05-27 2013-11-05 Google Inc. Generating midstring query refinements
US8849791B1 (en) * 2011-06-29 2014-09-30 Amazon Technologies, Inc. Assisted shopping
US8630851B1 (en) 2011-06-29 2014-01-14 Amazon Technologies, Inc. Assisted shopping
US8798995B1 (en) 2011-09-23 2014-08-05 Amazon Technologies, Inc. Key word determinations from voice data
US8930393B1 (en) * 2011-10-05 2015-01-06 Google Inc. Referent based search suggestions
US10013152B2 (en) 2011-10-05 2018-07-03 Google Llc Content selection disambiguation
US9081829B2 (en) * 2011-10-05 2015-07-14 Cumulus Systems Incorporated System for organizing and fast searching of massive amounts of data
WO2013052866A2 (en) 2011-10-05 2013-04-11 Google Inc. Semantic selection and purpose facilitation
US20130091266A1 (en) 2011-10-05 2013-04-11 Ajit Bhave System for organizing and fast searching of massive amounts of data
US9081834B2 (en) * 2011-10-05 2015-07-14 Cumulus Systems Incorporated Process for gathering and special data structure for storing performance metric data
CN103946838B (en) * 2011-11-24 2017-10-24 微软技术许可有限责任公司 Interactive multi-mode image search
US20130226892A1 (en) * 2012-02-29 2013-08-29 Fluential, Llc Multimodal natural language interface for faceted search
JP2013232026A (en) * 2012-04-27 2013-11-14 Sharp Corp Portable information terminal
US10019991B2 (en) * 2012-05-02 2018-07-10 Electronics And Telecommunications Research Institute Apparatus and method for speech recognition
US9684395B2 (en) * 2012-06-02 2017-06-20 Tara Chand Singhal System and method for context driven voice interface in handheld wireless mobile devices
US9595298B2 (en) 2012-07-18 2017-03-14 Microsoft Technology Licensing, Llc Transforming data to create layouts
WO2014014374A1 (en) 2012-07-19 2014-01-23 Yandex Europe Ag Search query suggestions based in part on a prior search
US20140046922A1 (en) * 2012-08-08 2014-02-13 Microsoft Corporation Search user interface using outward physical expressions
US8751963B1 (en) 2013-01-23 2014-06-10 Splunk Inc. Real time indication of previously extracted data fields for regular expressions
US8751499B1 (en) 2013-01-22 2014-06-10 Splunk Inc. Variable representative sampling under resource constraints
US9594814B2 (en) 2012-09-07 2017-03-14 Splunk Inc. Advanced field extractor with modification of an extracted field
US10394946B2 (en) 2012-09-07 2019-08-27 Splunk Inc. Refining extraction rules based on selected text within events
US20140208217A1 (en) 2013-01-22 2014-07-24 Splunk Inc. Interface for managing splittable timestamps across event records
US8682906B1 (en) 2013-01-23 2014-03-25 Splunk Inc. Real time display of data field values based on manual editing of regular expressions
US9753909B2 (en) 2012-09-07 2017-09-05 Splunk, Inc. Advanced field extractor with multiple positive examples
US8909642B2 (en) 2013-01-23 2014-12-09 Splunk Inc. Automatic generation of a field-extraction rule based on selections in a sample event
US9152929B2 (en) * 2013-01-23 2015-10-06 Splunk Inc. Real time display of statistics and values for selected regular expressions
US20140207758A1 (en) * 2013-01-24 2014-07-24 Huawei Technologies Co., Ltd. Thread Object-Based Search Method and Apparatus
US9147125B2 (en) 2013-05-03 2015-09-29 Microsoft Technology Licensing, Llc Hand-drawn sketch recognition
US9672287B2 (en) 2013-12-26 2017-06-06 Thomson Licensing Method and apparatus for gesture-based searching
US9965521B1 (en) * 2014-02-05 2018-05-08 Google Llc Determining a transition probability from one or more past activity indications to one or more subsequent activity indications
US9842592B2 (en) 2014-02-12 2017-12-12 Google Inc. Language models using non-linguistic context
US9412365B2 (en) 2014-03-24 2016-08-09 Google Inc. Enhanced maximum entropy models
US9953646B2 (en) 2014-09-02 2018-04-24 Belleau Technologies Method and system for dynamic speech recognition and tracking of prewritten script
US9898459B2 (en) 2014-09-16 2018-02-20 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
CN107003996A (en) 2014-09-16 2017-08-01 声钰科技 VCommerce
US9626768B2 (en) 2014-09-30 2017-04-18 Microsoft Technology Licensing, Llc Optimizing a visual perspective of media
US10282069B2 (en) * 2014-09-30 2019-05-07 Microsoft Technology Licensing, Llc Dynamic presentation of suggested content
WO2016061309A1 (en) 2014-10-15 2016-04-21 Voicebox Technologies Corporation System and method for providing follow-up responses to prior natural language inputs of a user
US10276158B2 (en) 2014-10-31 2019-04-30 At&T Intellectual Property I, L.P. System and method for initiating multi-modal speech recognition using a long-touch gesture
US10614799B2 (en) 2014-11-26 2020-04-07 Voicebox Technologies Corporation System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance
US10431214B2 (en) 2014-11-26 2019-10-01 Voicebox Technologies Corporation System and method of determining a domain and/or an action related to a natural language input
US20160171122A1 (en) * 2014-12-10 2016-06-16 Ford Global Technologies, Llc Multimodal search response
US10134394B2 (en) 2015-03-20 2018-11-20 Google Llc Speech recognition using log-linear model
US9443519B1 (en) 2015-09-09 2016-09-13 Google Inc. Reducing latency caused by switching input modalities
JP6481643B2 (en) * 2016-03-08 2019-03-13 トヨタ自動車株式会社 Audio processing system and audio processing method
US9978367B2 (en) 2016-03-16 2018-05-22 Google Llc Determining dialog states for language models
WO2017180153A1 (en) * 2016-04-15 2017-10-19 Entit Software Llc Removing wildcard tokens from a set of wildcard tokens for a search query
US9785715B1 (en) * 2016-04-29 2017-10-10 Conversable, Inc. Systems, media, and methods for automated response to queries made by interactive electronic chat
CN106021402A (en) * 2016-05-13 2016-10-12 河南师范大学 Multi-modal multi-class Boosting frame construction method and device for cross-modal retrieval
WO2018023106A1 (en) 2016-07-29 2018-02-01 Erik SWART System and method of disambiguating natural language processing requests
US10832664B2 (en) 2016-08-19 2020-11-10 Google Llc Automated speech recognition using language models that selectively use domain-specific model components
US10956503B2 (en) * 2016-09-20 2021-03-23 Salesforce.Com, Inc. Suggesting query items based on frequent item sets
US10380228B2 (en) 2017-02-10 2019-08-13 Microsoft Technology Licensing, Llc Output generation based on semantic expressions
US10311860B2 (en) 2017-02-14 2019-06-04 Google Llc Language model biasing system
US20190108276A1 (en) * 2017-10-10 2019-04-11 NEGENTROPICS Mesterséges Intelligencia Kutató és Fejlesztő Kft Methods and system for semantic search in large databases
US10552410B2 (en) 2017-11-14 2020-02-04 Mindbridge Analytics Inc. Method and system for presenting a user selectable interface in response to a natural language request
JP2021144065A (en) * 2018-06-12 2021-09-24 ソニーグループ株式会社 Information processing device and information processing method
CN117033724A (en) * 2023-08-24 2023-11-10 青海昇云信息科技有限公司 Multi-mode data retrieval method based on semantic association

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020123876A1 (en) * 2000-12-30 2002-09-05 Shuvranshu Pokhariyal Specifying arbitrary words in rule-based grammars
US6564213B1 (en) * 2000-04-18 2003-05-13 Amazon.Com, Inc. Search query autocompletion
US20040054541A1 (en) * 2002-09-16 2004-03-18 David Kryze System and method of media file access and retrieval using speech recognition
US20050283364A1 (en) * 1998-12-04 2005-12-22 Michael Longe Multimodal disambiguation of speech recognition
US7027987B1 (en) * 2001-02-07 2006-04-11 Google Inc. Voice interface for a search engine
US7096218B2 (en) * 2002-01-14 2006-08-22 International Business Machines Corporation Search refinement graphical user interface
US20060190436A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Dynamic client interaction for search
US20060190256A1 (en) * 1998-12-04 2006-08-24 James Stephanick Method and apparatus utilizing voice input to resolve ambiguous manually entered text input
US20060293890A1 (en) * 2005-06-28 2006-12-28 Avaya Technology Corp. Speech recognition assisted autocompletion of composite characters
US20070022005A1 (en) * 2005-07-21 2007-01-25 Hanna Nader G Method for requesting, displaying, and facilitating placement of an advertisement in a computer network
US20070050191A1 (en) * 2005-08-29 2007-03-01 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US20070061336A1 (en) * 2005-09-14 2007-03-15 Jorey Ramer Presentation of sponsored content based on mobile transaction event
US20070067345A1 (en) * 2005-09-21 2007-03-22 Microsoft Corporation Generating search requests from multimodal queries
US20070162422A1 (en) * 2005-12-30 2007-07-12 George Djabarov Dynamic search box for web browser
US20070164782A1 (en) * 2006-01-17 2007-07-19 Microsoft Corporation Multi-word word wheeling
US7277029B2 (en) * 2005-06-23 2007-10-02 Microsoft Corporation Using language models to expand wildcards
US20070239670A1 (en) * 2004-10-20 2007-10-11 International Business Machines Corporation Optimization-based data content determination
US20070299824A1 (en) * 2006-06-27 2007-12-27 International Business Machines Corporation Hybrid approach for query recommendation in conversation systems
US20080086311A1 (en) * 2006-04-11 2008-04-10 Conwell William Y Speech Recognition, and Related Systems
US20080162471A1 (en) * 2005-01-24 2008-07-03 Bernard David E Multimodal natural language query system for processing and analyzing voice and proximity-based queries
US20090006343A1 (en) * 2007-06-28 2009-01-01 Microsoft Corporation Machine assisted query formulation
US20090019002A1 (en) * 2007-07-13 2009-01-15 Medio Systems, Inc. Personalized query completion suggestion
US20100125457A1 (en) * 2008-11-19 2010-05-20 At&T Intellectual Property I, L.P. System and method for discriminative pronunciation modeling for voice search
US7778837B2 (en) * 2006-05-01 2010-08-17 Microsoft Corporation Demographic based classification for local word wheeling/web search
US7797303B2 (en) * 2006-02-15 2010-09-14 Xerox Corporation Natural language processing for developing queries

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070061335A1 (en) * 2005-09-14 2007-03-15 Jorey Ramer Multimodal search query processing
US8090738B2 (en) * 2008-05-14 2012-01-03 Microsoft Corporation Multi-modal search wildcards

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060190256A1 (en) * 1998-12-04 2006-08-24 James Stephanick Method and apparatus utilizing voice input to resolve ambiguous manually entered text input
US20050283364A1 (en) * 1998-12-04 2005-12-22 Michael Longe Multimodal disambiguation of speech recognition
US6564213B1 (en) * 2000-04-18 2003-05-13 Amazon.Com, Inc. Search query autocompletion
US20020123876A1 (en) * 2000-12-30 2002-09-05 Shuvranshu Pokhariyal Specifying arbitrary words in rule-based grammars
US7027987B1 (en) * 2001-02-07 2006-04-11 Google Inc. Voice interface for a search engine
US7096218B2 (en) * 2002-01-14 2006-08-22 International Business Machines Corporation Search refinement graphical user interface
US20040054541A1 (en) * 2002-09-16 2004-03-18 David Kryze System and method of media file access and retrieval using speech recognition
US20070239670A1 (en) * 2004-10-20 2007-10-11 International Business Machines Corporation Optimization-based data content determination
US20080162471A1 (en) * 2005-01-24 2008-07-03 Bernard David E Multimodal natural language query system for processing and analyzing voice and proximity-based queries
US20060190436A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Dynamic client interaction for search
US7461059B2 (en) * 2005-02-23 2008-12-02 Microsoft Corporation Dynamically updated search results based upon continuously-evolving search query that is based at least in part upon phrase suggestion, search engine uses previous result sets performing additional search tasks
US7277029B2 (en) * 2005-06-23 2007-10-02 Microsoft Corporation Using language models to expand wildcards
US20060293890A1 (en) * 2005-06-28 2006-12-28 Avaya Technology Corp. Speech recognition assisted autocompletion of composite characters
US20070022005A1 (en) * 2005-07-21 2007-01-25 Hanna Nader G Method for requesting, displaying, and facilitating placement of an advertisement in a computer network
US20070050191A1 (en) * 2005-08-29 2007-03-01 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US20070061336A1 (en) * 2005-09-14 2007-03-15 Jorey Ramer Presentation of sponsored content based on mobile transaction event
US20070067345A1 (en) * 2005-09-21 2007-03-22 Microsoft Corporation Generating search requests from multimodal queries
US20070162422A1 (en) * 2005-12-30 2007-07-12 George Djabarov Dynamic search box for web browser
US20070164782A1 (en) * 2006-01-17 2007-07-19 Microsoft Corporation Multi-word word wheeling
US7797303B2 (en) * 2006-02-15 2010-09-14 Xerox Corporation Natural language processing for developing queries
US20080086311A1 (en) * 2006-04-11 2008-04-10 Conwell William Y Speech Recognition, and Related Systems
US7778837B2 (en) * 2006-05-01 2010-08-17 Microsoft Corporation Demographic based classification for local word wheeling/web search
US20070299824A1 (en) * 2006-06-27 2007-12-27 International Business Machines Corporation Hybrid approach for query recommendation in conversation systems
US20080215555A1 (en) * 2006-06-27 2008-09-04 International Business Machines Corporation Hybrid Approach for Query Recommendation in Conversation Systems
US20090006343A1 (en) * 2007-06-28 2009-01-01 Microsoft Corporation Machine assisted query formulation
US20090019002A1 (en) * 2007-07-13 2009-01-15 Medio Systems, Inc. Personalized query completion suggestion
US20100125457A1 (en) * 2008-11-19 2010-05-20 At&T Intellectual Property I, L.P. System and method for discriminative pronunciation modeling for voice search

Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190297189A1 (en) * 2000-02-04 2019-09-26 Parus Holdings, Inc. Personal Voice-Based Information Retrieval System
US9843626B2 (en) * 2008-03-27 2017-12-12 Mitel Networks Corporation Method, system and apparatus for controlling an application
US20160050260A1 (en) * 2008-03-27 2016-02-18 Trung (Tim) Trinh Method, system and apparatus for controlling an application
US20090287680A1 (en) * 2008-05-14 2009-11-19 Microsoft Corporation Multi-modal query refinement
US20110145224A1 (en) * 2009-12-15 2011-06-16 At&T Intellectual Property I, L.P. System and method for speech-based incremental search
US8903793B2 (en) * 2009-12-15 2014-12-02 At&T Intellectual Property I, L.P. System and method for speech-based incremental search
US9396252B2 (en) 2009-12-15 2016-07-19 At&T Intellectual Property I, L.P. System and method for speech-based incremental search
US20110145214A1 (en) * 2009-12-16 2011-06-16 Motorola, Inc. Voice web search
US9081868B2 (en) * 2009-12-16 2015-07-14 Google Technology Holdings LLC Voice web search
US20120278308A1 (en) * 2009-12-30 2012-11-01 Google Inc. Custom search query suggestion tools
US9613015B2 (en) 2010-02-12 2017-04-04 Microsoft Technology Licensing, Llc User-centric soft keyboard predictive technologies
US10126936B2 (en) 2010-02-12 2018-11-13 Microsoft Technology Licensing, Llc Typing assistance for editing
US8782556B2 (en) 2010-02-12 2014-07-15 Microsoft Corporation User-centric soft keyboard predictive technologies
US10156981B2 (en) 2010-02-12 2018-12-18 Microsoft Technology Licensing, Llc User-centric soft keyboard predictive technologies
US20110202836A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation Typing assistance for editing
US20110202876A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation User-centric soft keyboard predictive technologies
US20110201387A1 (en) * 2010-02-12 2011-08-18 Microsoft Corporation Real-time typing assistance
US9165257B2 (en) 2010-02-12 2015-10-20 Microsoft Technology Licensing, Llc Typing assistance for editing
US9348417B2 (en) 2010-11-01 2016-05-24 Microsoft Technology Licensing, Llc Multimodal input system
US10067740B2 (en) 2010-11-01 2018-09-04 Microsoft Technology Licensing, Llc Multimodal input system
US20120173244A1 (en) * 2011-01-04 2012-07-05 Kwak Byung-Kwan Apparatus and method for voice command recognition based on a combination of dialog models
US8954326B2 (en) * 2011-01-04 2015-02-10 Samsung Electronics Co., Ltd. Apparatus and method for voice command recognition based on a combination of dialog models
US10409851B2 (en) 2011-01-31 2019-09-10 Microsoft Technology Licensing, Llc Gesture-based search
US10444979B2 (en) 2011-01-31 2019-10-15 Microsoft Technology Licensing, Llc Gesture-based search
US20120209590A1 (en) * 2011-02-16 2012-08-16 International Business Machines Corporation Translated sentence quality estimation
EP2568371A3 (en) * 2011-09-08 2016-01-06 Lg Electronics Inc. Mobile terminal and controlling method thereof
US9329747B2 (en) 2011-09-08 2016-05-03 Lg Electronics Inc. Mobile terminal and controlling method thereof
KR101852821B1 (en) 2011-09-08 2018-04-27 엘지전자 주식회사 Mobile terminal and method for controlling the same
US9299342B2 (en) 2011-09-23 2016-03-29 Microsoft Technology Licensing, Llc User query history expansion for improving language model adaptation
US9129606B2 (en) 2011-09-23 2015-09-08 Microsoft Technology Licensing, Llc User query history expansion for improving language model adaptation
US8249876B1 (en) 2012-01-03 2012-08-21 Google Inc. Method for providing alternative interpretations of a voice input to a user
US8788273B2 (en) 2012-02-15 2014-07-22 Robbie Donald EDGAR Method for quick scroll search using speech recognition
US10984337B2 (en) 2012-02-29 2021-04-20 Microsoft Technology Licensing, Llc Context-based search query formation
US11495208B2 (en) * 2012-07-09 2022-11-08 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US20150248882A1 (en) * 2012-07-09 2015-09-03 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US9818398B2 (en) * 2012-07-09 2017-11-14 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US20180158448A1 (en) * 2012-07-09 2018-06-07 Nuance Communications, Inc. Detecting potential significant errors in speech recognition results
US20140019462A1 (en) * 2012-07-15 2014-01-16 Microsoft Corporation Contextual query adjustments using natural action input
US9977835B2 (en) 2012-12-18 2018-05-22 Microsoft Technology Licensing, Llc Queryless search based on context
US9483518B2 (en) * 2012-12-18 2016-11-01 Microsoft Technology Licensing, Llc Queryless search based on context
US20140172892A1 (en) * 2012-12-18 2014-06-19 Microsoft Corporation Queryless search based on context
US20140258323A1 (en) * 2013-03-06 2014-09-11 Nuance Communications, Inc. Task assistant
US10783139B2 (en) * 2013-03-06 2020-09-22 Nuance Communications, Inc. Task assistant
US20140258324A1 (en) * 2013-03-06 2014-09-11 Nuance Communications, Inc. Task assistant utilizing context for improved interaction
US11372850B2 (en) 2013-03-06 2022-06-28 Nuance Communications, Inc. Task assistant
US10223411B2 (en) * 2013-03-06 2019-03-05 Nuance Communications, Inc. Task assistant utilizing context for improved interaction
US10795528B2 (en) 2013-03-06 2020-10-06 Nuance Communications, Inc. Task assistant having multiple visual displays
US10311115B2 (en) 2014-05-15 2019-06-04 Huawei Technologies Co., Ltd. Object search method and apparatus
EP3001333A4 (en) * 2014-05-15 2016-08-24 Huawei Tech Co Ltd Object search method and apparatus
EP2947584A1 (en) * 2014-05-23 2015-11-25 Samsung Electronics Co., Ltd Multimodal search method and device
US11734370B2 (en) 2014-05-23 2023-08-22 Samsung Electronics Co., Ltd. Method for searching and device thereof
US11080350B2 (en) 2014-05-23 2021-08-03 Samsung Electronics Co., Ltd. Method for searching and device thereof
US11157577B2 (en) 2014-05-23 2021-10-26 Samsung Electronics Co., Ltd. Method for searching and device thereof
US11314826B2 (en) 2014-05-23 2022-04-26 Samsung Electronics Co., Ltd. Method for searching and device thereof
US9990433B2 (en) 2014-05-23 2018-06-05 Samsung Electronics Co., Ltd. Method for searching and device thereof
US10223466B2 (en) 2014-05-23 2019-03-05 Samsung Electronics Co., Ltd. Method for searching and device thereof
US10671182B2 (en) * 2014-10-16 2020-06-02 Touchtype Limited Text prediction integration
CN106446122A (en) * 2016-09-19 2017-02-22 华为技术有限公司 Information retrieval method and device and computation device
CN111159472A (en) * 2018-11-08 2020-05-15 微软技术许可有限责任公司 Multi-modal chat techniques
US11921782B2 (en) 2018-11-08 2024-03-05 Microsoft Technology Licensing, Llc VideoChat
CN113204669A (en) * 2021-06-08 2021-08-03 武汉亿融信科科技有限公司 Short video search recommendation method and system based on voice recognition and computer storage medium
CN113656546A (en) * 2021-08-17 2021-11-16 百度在线网络技术(北京)有限公司 Multimodal search method, apparatus, device, storage medium, and program product
WO2023074916A1 (en) * 2021-10-29 2023-05-04 Tesnology Inc. Data transaction management with database on edge device

Also Published As

Publication number Publication date
US20090287681A1 (en) 2009-11-19
US20090287680A1 (en) 2009-11-19
US8090738B2 (en) 2012-01-03

Similar Documents

Publication Publication Date Title
US20090287626A1 (en) Multi-modal query generation
US11423888B2 (en) Predicting and learning carrier phrases for speech input
US9330661B2 (en) Accuracy improvement of spoken queries transcription using co-occurrence information
US10521479B2 (en) Evaluating semantic interpretations of a search query
US8812534B2 (en) Machine assisted query formulation
US8260809B2 (en) Voice-based search processing
US9256683B2 (en) Dynamic client interaction for search
EP2994908B1 (en) Incremental speech input interface with real time feedback
US7275049B2 (en) Method for speech-based data retrieval on portable devices
JP6204982B2 (en) Contextual query tuning using natural motion input
US7742922B2 (en) Speech interface for search engines
US20100153112A1 (en) Progressively refining a speech-based search
US20090055386A1 (en) System and Method for Enhanced In-Document Searching for Text Applications in a Data Processing System
EP2135180A1 (en) Method and apparatus for distributed voice searching
US20090192991A1 (en) Network information searching method by speech recognition and system for the same
US11526512B1 (en) Rewriting queries
US20090006344A1 (en) Mark-up ecosystem for searching
US11062700B1 (en) Query answering with controlled access knowledge graph
Brøndsted et al. Mobile information access with spoken query answering
Onifade et al. Semantic Similarities in Voice Information Retrieval System for Documents
JP2011065401A (en) Information processor and data processing method and program thereof
KR20070064575A (en) Method and system for recommending query based search index

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PAEK, TIMOTHY SEUNG YOON;THIESSON, BO;JU, YUN-CHENG;AND OTHERS;REEL/FRAME:021458/0692;SIGNING DATES FROM 20080825 TO 20080826

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0509

Effective date: 20141014