doc_id (string, 7–11 chars) | appl_id (string, 8 chars) | flag_patent (int64, 0–1) | claim_one (string, 13–18.3k chars) |
---|---|---|---|
9971772 | 15645526 | 1 | 1. A method comprising: sending voice input data received by a media device to a speech-to-text service; receiving, by the media device from the speech-to-text service, a textual representation of at least a portion of the voice input data; generating a signature based on at least a portion of the textual representation, wherein the signature is a hash value; locating a particular data entry among a set of data entries by searching the set of data entries for a data entry matching the signature generated based on the at least a portion of the textual representation, each data entry of the set of data entries specifying a mapping between a given signature and one or more media device actions; updating the set of data entries by storing the mapping between the signature and the at least a portion of the textual representation; in response to locating a particular data entry among the set of data entries based on the generated signature, performing one or more particular media device actions associated with the particular data entry, the one or more particular media device actions including sending a media content query to a media search service; receiving, by the media device, one or more content item listings based on the media content query; and generating for display at least a portion of the one or more content item listings. |
10026037 | 14604610 | 1 | 1. In a computerized knowledge learning system, a method for configuring knowledge sets and AI algorithms, useful in association with an automated messaging system, the method comprising: receiving at least one training message; selecting a subsection of text from the at least one training message; selecting a knowledge set from a plurality of knowledge sets for the selected subsection of text based upon user, industry, and service type, wherein each knowledge set includes probabilistic associations between a term and a category; selecting an insight from a plurality of insights for the selected subsection of text based upon associations of the terms within the subsection of text given the selected knowledge set; categorizing the training message based upon the insight; receiving one of approval or rejection of the categorization; and updating the probabilistic associations in response to the received approval or rejection to improve classification accuracy. |
20150030237 | 13949940 | 0 | 1. A computer-implemented method of restoring an image comprising: receiving, at a processor, a poor quality image; applying the poor quality image to each of a plurality of trained machine learning predictors; obtaining from each of the trained machine learning predictors, a restored version of the poor quality image. |
7606782 | 10296114 | 1 | 1. A computer-based enterprise knowledge management system that represents and stores business knowledge in a natural language and generates program source code or rules to implement said business knowledge for use in business practices, comprising: at least one user interface for receiving said business knowledge in one or more sentences in a natural language; at least one user interface interactively presenting one or more statements from a knowledge manager and one or more production rules from a generator to the user via the user interface; at least one computer-readable memory, containing: (a) said knowledge manager programmed to represent said business knowledge as said at least one statement comprising at least one relationship, said relationship instantiating at least one relation having at least one role and at least one concept filling said role, wherein the relationships are defined using semantic modeling, wherein said representation of said business knowledge is accomplished using the Rete algorithm; and (b) said knowledge manager programmed to process said relationships semantically and syntactically; (c) said generator in communication with said knowledge manager to generate computer program code or production rules that combine syntactic and semantic constraints for implementing said business knowledge; and at least one tangible computer-readable storage medium selected from the group consisting of a temporary memory system and a permanent memory system for storing said statement and said computer program code or production rules to be integrated with external object models and databases. |
8326614 | 12471072 | 1 | 1. A speech enhancement system, comprising: a first device that converts sound waves into operational signals; and a second device that selects one or more templates from a speech codebook that is shared with a third device that comprises an encoder and decoder, where the one or more templates represent spectral shapes and excitation pulses; where the selected one or more templates model speech characteristics of the operational signals. |
20150235651 | 14181374 | 0 | 1. A computer implemented method comprising: receiving, at a processing system, a first signal representing an output of a speaker device; receiving, at the processing system, a second signal comprising (i) the output of the speaker device and (ii) an audio signal corresponding to an utterance of a speaker; aligning, by the processing system, one or more segments of the first signal with one or more segments of the second signal; classifying acoustic features of the one or more segments of the first signal to obtain a first set of vectors associated with speech units; classifying acoustic features of the one or more segments of the second signal to obtain a second set of vectors associated with speech units; modifying the second set of vectors using the first set of vectors to obtain a modified second set of vectors, wherein the modified second set of vectors represents a suppression of the output of the speaker device in the second signal; and providing the modified second set of vectors to generate a transcription of the utterance of the speaker. |
20130144625 | 13757930 | 0 | 1. A computer implemented method, comprising: receiving a user-based selection of a first portion of words in a document, at least a portion of the document being displayed on a user interface on a display device, the document being pre-associated with a first voice model; applying, by the one or more computers, in response to the user-based selection of the first portion of words, a first set of indicia to the user-selected first portion of words in the document; and overwriting the association of the first voice model, by the one or more computers, with a second voice model for the first portion of words. |
20100305942 | 12842778 | 0 | 1. A method of automatic, computer based creation of a cross-index for a set of documents, the method comprising: accessing a memory to read at least a sequence of words from a document in the set of documents; determining by a processing unit a respective score for at least a subset of words in the sequence based at least in part on word length; operating the processing unit to determine a number of the at least a subset of words in the sequence that have a score greater than or equal to a threshold score; operating the processing unit to determine whether the sequence of words contains a number of words that satisfies a verbosity setting; determining that the sequence of words is a significant phrase in response to determining that the number of the at least a subset of words in the sequence that have a score greater than or equal to the threshold score equals or exceeds a predetermined number and determining that the number of words in the sequence satisfies the verbosity setting; and adding the significant phrase to a cross-index for the set of documents in response to determining that the significant phrase has been found in more than one document in the set of documents. |
9843547 | 14564646 | 1 | 1. A system, comprising: a first service device and a second service device, each comprising one or more processors and a service email application, the first service device having at least one first service, the second service device having at least one second service, wherein the service email application of the first service device, when executed at the one or more processors of the first service device, is configured to: retrieve a command email from at least one email server via a network, wherein the command email comprises at least one first command and has an email identifier associated with the at least one first service; extract the at least one first command from the command email; send the extracted first command to the at least one first service such that the at least one first service performs a corresponding first function based on the extracted first command; receive a result from the at least one service performing the first function; generate a result email comprising the result, wherein the result comprises at least one second command, and the result email is subject to be sent to the second service device; and send the result email to the at least one email server; wherein the service email application of the second service device, when executed at the one or more processors of the second service device, is configured to: retrieve the result email generated by the service email application of the first service device from the at least one email server; extract the at least one second command from the result email; and send the extracted second command to the at least one second service such that the at least one second service performs a corresponding second function based on the extracted second command, wherein the service email application of the first service device, when executed, is further configured to: verify the command email based on the email identifier of the command email; decrypt the command email; and encrypt the result email, and wherein the service email application of the first service device comprises: an address verification module, configured to verify the command email based on the email identifier of the command email; an encryption/decryption module, configured to encrypt the result email and decrypt the command email; a command processing module, configured to extract the at least one first command from the command email and send the extracted first command to the at least one first service such that the at least one first service performs the corresponding first function based on the extracted first command; and a service email processing module, configured to retrieve the command email, receive the result, generate the result email, and send the encrypted result email. |
20120233196 | 13510561 | 0 | 1. A computer-implemented method, comprising: receiving, at a data processing apparatus, a first image search query and first image search results that are responsive to the first image search query, the first image search query being one or more terms in a first language; obtaining, by the data processing apparatus, translations of the first image search query, wherein each translation is a translation of the first image search query into a respective second language different from the first language; receiving, at the data processing apparatus, for each translation of the first image search query, respective image search results that are determined to be responsive to the translation of the first image search query when the translation is used as an image search query; providing first instructions to a client device that, when executed by the client device, cause the client device to present a user interface including: one or more of the first image search results responsive to the first image search query; and a respective cross-language search option for each of the translations of the first image search query, the respective cross-language search option for each translation including the translation and a preview of the respective image search results responsive to the translation, wherein each cross-language search result is selectable in the user interface. |
8112402 | 11710805 | 1 | 1. A computer implemented method, performed by a computer having a processor, of disambiguating references to named entities, comprising: identifying a surface form of a named entity in a text, the surface form being an ambiguous orthographic representation of a common name for the named entity, the surface form having a corresponding surface form reference in a surface form reference database; enumerating, from the surface form reference, a plurality of different reference named entities based on the identified surface form of the named entity, wherein the surface form is associated in the surface form reference with the plurality of different reference named entities each being formed of a different set of words, and each of the different reference named entities is associated with a named entity reference, the named entity references being stored in a named entity reference database that is separate from the surface form reference database, each of the named entity references associating one of the different reference named entities to multiple entity indicators, the entity indicators including both labels applied to a respective named entity in an information resource, and context indicators applied to the respective named entity in the information resource, in which the labels comprise classifying identifiers applied to the respective named entities in the information resource; evaluating, with the processor, one or more measures of correlation between one or more of the entity indicators in the information resource for each of the identified reference named entities, and the text, the evaluation including comparisons of the text to both the labels and the context indicators; identifying, with the processor, one of the reference named entities for which the associated entity indicators have a relatively high correlation to the text; and providing a disambiguation output that indicates the identified reference named entity to be associated with the surface form of the named entity in the text. |
9734821 | 14755854 | 1 | 1. A method performed in a computer system, for testing words defined in a pronunciation lexicon used in an automatic speech recognition system that is included in the computer system, wherein the method comprises: obtaining a plurality of test sentences which can be accepted by a language model used in the automatic speech recognition system, wherein the test sentences include a plurality of the words defined in the pronunciation lexicon; obtaining variations of speech data corresponding to each of the test sentences; obtaining a plurality of texts by recognizing the variations of speech data, or a plurality of texts generated by recognizing the variations of speech data; constructing a word graph, using the plurality of texts, for each of the test sentences, wherein each word in the word graph corresponds to each of the words defined in the pronunciation lexicon; and determining whether (i) all or (ii) parts of words in a test sentence of the test sentences are present in a path of the word graph derived from the test sentence. |
20120136985 | 12955007 | 0 | 1. A method comprising: receiving a plurality of items of user-generated content; based at least in part on the user-generated content of two or more items in the plurality of items, determining that a set of items including the two or more items reflects a controversial event related to an entity of a plurality of entities, wherein at least one other set of items of the plurality of items refers to another entity of the plurality of entities; wherein the method is performed by one or more special-purpose computing devices. |
20080288251 | 12180797 | 0 | 1. A method, performed on a computer system, for tracking time using speech recognition, the method comprising the steps of: accessing speech data; recognizing at least two voice commands from the speech data, each voice command occurring at a different time; determining a first time associated with a speaking of a first of the voice commands, wherein said first voice command identifies a start of a time interval; determining a second time associated with a speaking of a second of the voice commands, wherein said second voice command identifies an end of said time interval; and storing data identifying said time interval and data identifying one or more of said first voice command and second voice command, wherein the speech data comprises a time stamp; the step of determining a first time comprises: determining an offset time between the time stamp and a time when the first voice command is spoken; and determining the first time through reference to the time stamp and the offset time. |
6101473 | 08907628 | 1 | 1. A remote server to enable a local user to increase the functionality of a local browser having a graphical user interface, comprising: a remote web browser residing on the remote server; a speech controller electronically coupled to said remote web browser, said controller being configured to form control links coupling the local browser to said remote browser via an Internet data communication link to enable said remote web browser and the local browser to function cooperatively; and a speech server having a speech recognition function residing on the remote server, said speech server coupling said controller to a telephone network so that a telephonic voice communication link may be established between the user and said controller; wherein voice commands to control browsing may be input via said telephonic voice communication link and wherein graphical user interface commands to control browsing may also be input via the local browser. |
8055298 | 12854899 | 1 | 1. A communication device comprising: a microphone; a speaker; an input device; a display; a camera; a wireless communicating system; a voice communicating implementer to implement voice communication by utilizing said microphone and said speaker; an automobile controlling implementer, by which said communication device remotely controls, in response to an automobile controlling command input via said input device, an automobile; a caller ID implementer which retrieves a predetermined color data and/or sound data which is specific to the caller of the incoming call received by said communication device, and outputs the color and/or sound corresponding to said predetermined color data and/or sound data from said communication device; an auto time adjusting implementer which automatically adjusts the clock of said communication device in accordance with a wireless signal received by said wireless communication system; a calculating implementer which implements mathematical calculation by utilizing digits input via said input device; a word processing implementer which includes a bold formatting implementer, an italic formatting implementer, and/or a font formatting implementer, wherein said bold formatting implementer changes alphanumeric data to bold, said italic formatting implementer changes alphanumeric data to italic, and said font formatting implementer changes alphanumeric data to a selected font; a startup software implementer, wherein a startup software identification data storage area stores a startup software identification data which is an identification of a certain software program selected by the user, and when the power of said communication device is turned on, said startup software implementer retrieves said startup software identification data from said startup software identification data storage area and activates said certain software program; a stereo audio data playback implementer which playbacks and outputs in a stereo fashion the audio data selected by the user of said communication device; a digital camera implementer, wherein a photo quality identifying command is input via said input device, and when a photo taking command is input via said input device, a photo data retrieved via said camera is stored in a photo data storage area with the quality indicated by said photo quality identifying command; a multiple language displaying implementer, wherein a specific language is selected from a plurality of languages, and the interface to operate said communication device is displayed with said specific language; a caller's information displaying implementer which displays a personal information regarding caller on said display when said communication device receives a phone call; a communication device remote controlling implementer, wherein said communication device is remotely controlled by a computer via a network; a shortcut icon displaying implementer, wherein a shortcut icon is displayed on said display, and a software program indicated by said shortcut icon is activated when said shortcut icon is selected; and a multiple channel processing implementer which sends data in a wireless fashion by utilizing multiple channels. |
20020010587 | 09387416 | 0 | 1. A method for detecting nervousness in a voice in a business environment, comprising the steps of: (a) receiving voice signals from a person during a business event; (b) analyzing the voice signals for determining a level of nervousness of the person during the business event; and (c) outputting an indication of the level of nervousness of the person prior to completion of the business event. |
9800727 | 15294472 | 1 | 1. A method for automated routing of voice calls using time-based predictive clickstream data, the method comprising: capturing, by a server computing device at a first point in time, clickstream data corresponding to one or more web browsing sessions between a client computing device and a web server, the clickstream data comprising uniform resource locators (URLs) and one or more timestamps of the corresponding session; converting, by the server computing device, the clickstream data into tokens, comprising filtering the URLs to retain intent-relevant URLs; parsing each intent-relevant URLs into one or more tokens, each token comprising a discrete text segment of the corresponding URL; assigning a time value to each token that is associated with at least one of the timestamps from the corresponding web browsing session; generating, by the server computing device, a frequency matrix based upon the tokens, the frequency matrix comprising for each token and web browsing session (i) a frequency of the token appearing in the intent-relevant URLs in the session (“TF”) and (ii) a log transform of an inverse of a ratio of number of distinct intent-relevant URLs that include the token over the number of intent-relevant URLs in the session (“IDF”); generating, by the server computing device, a feature vector based upon the frequency matrix, the feature vector comprising for each token a value indicating a product of TF and IDF; receiving, by the server computing device at a second point in time, an incoming voice call from a remote device; identifying, by the server computing device, that the remote device is associated with a user of the client computing device; determining, by the server computing device, intent for the incoming voice call based upon the feature vector; and routing, by the server computing device, the incoming voice call to a destination device based upon the determined intent. |
20130185049 | 13348995 | 0 | 1. A method for determining a dropped pronoun from a source language, wherein the method comprises: collecting parallel sentences from a source and a target language; creating at least one word alignment between the parallel sentences in the source and the target language; mapping at least one pronoun from the target language sentence onto the source language sentence; computing at least one feature from the mapping, wherein the at least one feature is extracted from both the source language and the at least one pronoun projected from the target language; and using the at least one feature to train a classifier to predict position and spelling of at least one pronoun in the target language when the at least one pronoun is dropped in the source language; wherein at least one of the steps is carried out by a computer device. |
20120078615 | 13245759 | 0 | 1. A computer-implemented method for generating text using a touch-sensitive keyboard, comprising: receiving touch input from a first plurality of simultaneous touchpoints; determining a text character for each respective simultaneous touchpoint based on the touch input; and generating a text word, with a computing device, based on the text characters determined from the first plurality of simultaneous touchpoints. |
9165185 | 14256228 | 1 | 1. A computer-implemented method for providing a text-based representation of a region of interest of an image to a user, the method comprising the steps of: identifying text zones within the image, each text zone comprising textual content and having a respective rank assigned thereto based on an arrangement of the text zones within the image; determining a processing sequence for performing optical character recognition on the text zones, the processing sequence being based, firstly, on an arrangement of the text zones with respect to the region of interest and, secondly, on the ranks assigned to the text zones; and performing an optical character recognition process on the text zones according to the processing sequence to progressively obtain a machine-encoded representation of the region of interest, and concurrently present said machine-encoded representation to the user, via an output device, as the text-based representation. |
20100279666 | 12434518 | 0 | 1. A method in a mobile device for transmitting context information during establishment of a voice call to a destination device, the method comprising: receiving a request from a user of the mobile device to initiate a voice call to a destination device; launching an application that facilitates transmitting context information when establishing the voice call between the mobile device and the destination device, wherein the launched application is configured to: present a user-selectable list of at least two types of content options to transmit as context information when establishing the voice call, wherein the user-selectable list of options includes: an option to transmit a photograph; an option to transmit a video presentation; an option to transmit a text-based phrase; or an option to transmit an audio clip; receive a selection from the user of one of the options from the user-selectable list of options; and receive input associated with the type of context information related to the selected option; establishing a session initiated protocol communication session between the mobile device and the destination device; transmitting the context information associated with the received input to the destination device over the session initiated protocol communication session; and initiating a voice call to the destination device over the session initiated protocol communication session a certain time period after transmission of the context information to the destination device. |
20150096040 | 14042311 | 0 | 1. A computer-implemented method for tokenizing data, comprising: accessing, by a computer, a vector table comprising one or more columns of vectors; modifying first sensitive data using one or more vector table columns to create first modified data; tokenizing the first modified data to form first tokenized data; accessing, by the computer, an updated vector column; replacing, by the computer, a vector table column with the updated vector column to create an updated vector table; modifying second sensitive data using one or more updated vector table columns to create second modified data; and tokenizing the second modified data to form second tokenized data. |
9940931 | 15201188 | 1 | 1. A computer-implemented method comprising: under control of a computing device configured with specific computer-executable instructions, generating audio data comprising speech; transmitting the audio data to a remote computing system including a speech recognition engine; receiving, from the remote computing system, a plurality of transcription results for a portion of a transcription of the speech, wherein the transcription has been generated from the audio data by the speech recognition engine; receiving, from the remote computing system, a confidence level for each transcription result of the plurality of transcription results, wherein the confidence level for each transcription result has been generated by the speech recognition engine, and wherein the confidence level for each transcription result of the plurality of transcription results represents a confidence in an accuracy of the transcription result; determining a ranked order for the plurality of transcription results from the confidence levels of the plurality of transcription results; presenting the plurality of transcription results for the portion of the transcription in the ranked order, with each transcription result of the plurality of transcription results presented with the confidence level for the transcription result; and receiving a selection, from the plurality of transcription results, of a first transcription result for the portion of the transcription. |
20160344868 | 15230032 | 0 | 1. A non-transitory computer readable medium comprising a plurality of instructions stored therein adapted to generate a customer satisfaction score based on behavioral assessment data, the plurality of instructions comprising: instructions that, when executed, analyze one or more communications between a customer and an agent, wherein the analysis comprises instructions that, when executed, apply a linguistic-based psychological behavioral model to each communication to determine a personality type of the customer by analyzing behavioral characteristics of the customer based on the one or more communications; instructions that, when executed, select at least one filter criterion which comprises a customer, an agent, a team, or a call type; instructions that, when executed, calculate a customer satisfaction score using the at least one selected filter criterion across a selected time interval and based on one or more communications; and instructions that, when executed, display a report including the calculated customer satisfaction score to a user that matches the at least one selected filter criterion for the selected time interval. |
5435564 | 08173643 | 1 | 1. An electronic word building dictionary machine comprising: keyboard means to input a user determined set of letters, a set of words in memory, comparison means to compare said input set of letters with said set of words in memory to provide a set of matching words from said set of words in memory, said comparison means including means for treating said variable letter member as a sequence of letters of the alphabet, said sequence at least one letter of the alphabet and said set of matching words comprising words which consist only of a subset of letters from said input set of letters, ranking means to provide a predetermined score for each of said words in said set of matching words, display means to display on said machine each of said words in the sequence of value of said score together with the score value of the word being displayed, wherein said keyboard means has a second input key representing a variable number of letters to provide a variable letter member of said user determined set of letters. |
7779398 | 11149052 | 1 | 1. A method comprising: extracting, with a preprocessor in a computer system, macroinstructions that are hard-coded into parser code of a command line interface (CLI) parser, wherein the macroinstructions define a proper CLI syntax for CLI commands input to a CLI prompt and include parse nodes used by the CLI parser to analyze whether one or more CLI commands input to the CLI prompt have proper CLI syntax, and wherein the macroinstructions are written according to a first computer system language; converting, with the preprocessor, the macroinstructions into at least one parse graph having an Extensible Markup Language (XML) format, wherein the converting includes encapsulating the parse nodes of the macroinstructions with XML tags and stitching together the encapsulated parse nodes of the macroinstructions to generate the parse graph, and wherein the first computer system language is different than an Extensible Markup Language (XML) language associated with the parse graph; and generating, with the preprocessor, an exportable representation of the at least one parse graph and outputting the exportable representation of the at least one parse graph from the computer system. |
20050257173 | 11156873 | 0 | 1. A multimodal system for controlling electronic components, comprising: a general purpose computing system which is in communication with said electronic components via a computer network, said electronic components being separate from the computing system; a computer program comprising program modules executable by the computing system, said program modules comprising an object selection module, a gesture recognition module, and a speech control module, each of which provides inputs to an integration module that integrates said inputs to arrive at a unified interpretation of what component a user wants to control and what control action is desired. |
20120101735 | 13278658 | 0 | 1. A method for recognizing an emotion of an individual based on Action Units (AUs), the method comprising: receiving an input AU string including one or more AUs that represents a facial expression of an individual from an AU detector; matching the input AU string with each of a plurality of AU strings, wherein each of the plurality of AU strings includes a set of highly discriminative AUs, each representing an emotion; identifying an AU string from the plurality of AU strings that best matches the input AU string; and outputting an emotion label corresponding to the best matching AU string that indicates the emotion of the individual. |
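The record above matches a detected Action Unit (AU) string against per-emotion AU templates and outputs the label of the best match. A small sketch of that matching step, using hypothetical AU templates and Jaccard overlap as the match criterion (the claim's "highly discriminative" AU sets and matching rule are not specified):

```python
# Hypothetical emotion templates of Action Unit numbers; in the claim these
# would be sets of highly discriminative AUs, one set per emotion.
TEMPLATES = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
}

def recognize_emotion(input_aus):
    """Pick the emotion whose AU template best matches the detected AUs,
    scoring each candidate by Jaccard overlap between AU sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    detected = set(input_aus)
    return max(TEMPLATES, key=lambda emo: jaccard(detected, TEMPLATES[emo]))

print(recognize_emotion([6, 12, 25]))
```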
20110081073 | 12898916 | 0 | 1. A method for generating an ensemble classifier, the method comprising: transforming, automatically with a processor, multidimensional training data into a plurality of response planes according to a plurality of recognition algorithms, wherein each of the response planes comprise a set of confidence scores; transforming, automatically with the processor, the response planes into a plurality of binary response planes, wherein each of the binary response planes comprise a set of binary scores corresponding to one of the confidence scores; transforming, automatically with the processor, a first combination of the binary response planes into a first set of diversity metrics according to a first diversity measure; transforming, automatically with the processor, a second combination of the binary response planes into a second set of diversity metrics according to a second diversity measure; selecting a first metric from the first set of diversity metrics; selecting a second metric from the second set of diversity metrics; generating, automatically with the processor, a predicted performance of a child combination of the recognition algorithms corresponding to the first combination and the second combination, wherein the predicted performance is based at least in part upon the first metric and the second metric; selecting parent recognition algorithms from the recognition algorithms based at least in part upon the predicted performance; and generating the ensemble classifier, wherein the ensemble classifier comprises the parent recognition algorithms. |
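The record above binarizes each recognizer's confidence-score plane and computes diversity metrics over combinations of the binary planes. A minimal sketch of those two transformations, using the pairwise disagreement measure as one assumed diversity measure (the claim does not name its two measures):

```python
def binarize(confidences, threshold=0.5):
    """Turn a response plane of confidence scores into binary scores."""
    return [1 if c >= threshold else 0 for c in confidences]

def disagreement(plane_a, plane_b):
    """One common pairwise diversity measure: the fraction of samples on
    which two classifiers' binary decisions differ."""
    differing = sum(1 for a, b in zip(plane_a, plane_b) if a != b)
    return differing / len(plane_a)

# Two hypothetical recognizers' confidence scores on five samples.
a = binarize([0.9, 0.2, 0.7, 0.4, 0.8])
b = binarize([0.6, 0.3, 0.1, 0.9, 0.7])
print(disagreement(a, b))
```

In the claimed system, metrics like this would feed a predictor of how well a child combination of recognizers performs, guiding selection of parent algorithms for the ensemble.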
20110224980 | 13044737 | 0 | 1. A speech recognition system comprising: a sound source separating section which separates mixed speeches from multiple sound sources from one another; a mask generating section which generates a soft mask which can take continuous values between 0 and 1 for each frequency spectral component of a separated speech signal using distributions of speech signal and noise against separation reliability of the separated speech signal; and a speech recognizing section which recognizes speeches separated by the sound source separating section using soft masks generated by the mask generating section. |
7490081 | 11334659 | 1 | 1. A computer implemented method for automatic identification and notification of relevant program defects, the computer implemented method comprising: receiving a program defect description from a user of a defect database; responsive to determining that the program defect description is new, creating an event record of the program defect description; extracting each word and each phrase of the program defect description in sequential order from the program defect description, wherein the each phrase comprises at least two words; determining whether the each word and the each phrase are included in a defect dictionary on the defect database; responsive to locating the each word and each phrase in the defect dictionary, updating the defect dictionary to include the each word and the each phrase from the program defect description; responsive to the absence of the each word and the each phrase in the defect dictionary, adding the each word and the each phrase to the defect dictionary; searching a plurality of defect databases for the each word and the each phrase; responsive to locating at least one defect database among the plurality of defect databases containing the each word and the each phrase, calculating a final word relevancy percentage for the each word and the each phrase, wherein calculating a final word relevancy percentage for the each word and the each phrase further comprises: calculating an initial word relevancy percentage for the each word and the each phrase within the program description, wherein determining the initial word relevancy percentage comprises using a record maintained in the defect dictionary, wherein the record indicates relevancy of the each word and the phrase, wherein determining the relevancy comprises determining how often users use the each word and the each phrase in the defect descriptions; receiving a defect database relevancy ranking table, wherein the defect database relevancy ranking table lists each defect database in the plurality of defect databases, wherein the each defect database listed is associated with a relevancy percentage assigned by the user; receiving a program component factor table, wherein the user assigns relevancy percentages to each program component based on relevancy to the program defect description, wherein the each program component comprises a set of components in a program; receiving the source factor percentages assigned by the user to the defect databases, wherein the source factor percentages are percentages assigned by the user depending on whether the defect databases are open source databases or closed source databases; calculating a final word relevancy percentage for the each word and the each phrase using the initial word relevancy percentage, the defect database relevancy ranking table, the program component factor table, and the source factor percentages in the calculation of the final word relevancy percentage for the each word and the each phrase; and sending relevant defects and the final word relevancy percentage to a program developer to repair the program, wherein the program developer utilizes the relevant defects. |
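The defect-relevancy record above combines an initial per-word relevancy with three user-assigned factor tables (database ranking, program component, and open/closed source). The claim does not fix a combination formula; a simple product of the percentage factors is assumed in this sketch:

```python
def final_relevancy(initial_pct, db_rank_pct, component_pct, source_pct):
    """Combine the per-word initial relevancy percentage with the
    user-assigned database-ranking, program-component, and source-factor
    percentages. A multiplicative combination is an assumption here;
    the claim only says all four inputs enter the calculation."""
    result = 100.0
    for pct in (initial_pct, db_rank_pct, component_pct, source_pct):
        result *= pct / 100.0
    return result

# e.g. a word with 80% initial relevancy, found in a database ranked 90%,
# in a component weighted 100%, from a closed-source database weighted 50%:
print(final_relevancy(80, 90, 100, 50))
```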
20120150536 | 12964433 | 0 | 1. A method comprising: obtaining access to a large reference acoustic model for automatic speech recognition, said large reference acoustic model having L states modeled by L mixture models, said large reference acoustic model having N components; identifying a desired number of components N_c, less than N, to be used in a restructured acoustic model derived from said reference acoustic model, said desired number of components N_c being selected based on a computing environment in which said restructured acoustic model is to be deployed, said restructured acoustic model also having L states; for each given one of said L mixture models in said reference acoustic model, building a merge sequence which records, for a given cost function, sequential mergers of pairs of said components associated with said given one of said mixture models; assigning a portion of said N_c components to each of said L states in said restructured acoustic model; and building said restructured acoustic model by, for each given one of said L states in said restructured acoustic model, applying said merge sequence to a corresponding one of said L mixture models in said reference acoustic model until said portion of said N_c components assigned to said given one of said L states is achieved. |
20110077046 | 12962967 | 0 | 1. A device for placing an order in a wireless telecommunications network, the device comprising: an input interface; an output interface; a transceiver; a processor in communication with the input interface, the output interface, and the transceiver; and a memory in communication with the processor, the memory being configured to store instructions that, when executed, make the processor operable to: responsive to receiving a search parameter via the input interface, generate a search query for content, the search query comprising the search parameter; transmit, via the transceiver, the search query to a server by way of the wireless telecommunications network; receive, from the server via the transceiver as at least one multimedia messaging service (MMS) message, at least one bundle of multimedia content related to an entity identified by the search parameter, the bundle of multimedia content being compiled into an interactive multimedia presentation; present, via the output interface, the interactive multimedia presentation, the interactive multimedia presentation comprising at least one item associated with the entity; receive, via the input interface, an input, the input comprising a selection of at least one of the at least one item presented in the interactive multimedia presentation; and transmit, via the transceiver, an order to the entity, the order comprising the item selection. |
9503556 | 14898692 | 1 | 1. An apparatus having at least one processor and at least one memory having computer-readable code stored thereon which when executed controls the at least one processor to perform a method comprising: while providing two-way communication in a voice call, detecting whether a speaker component of a voice communications device is in a state of being moved away from a user's ear; in response to detecting that the speaker component of the voice communications device is in a state of being moved away from a user's ear, entering a line activity mode; in the line activity mode, determining whether voice activity is present on the inbound channel of the voice call, wherein determining whether voice activity is present on the inbound channel of the voice call is performed by speech analysis on speech content on the inbound channel and determining that a predetermined phrase is present; and in response to determining the presence of voice activity on the inbound channel of the call when in the line activity mode, causing announcement of the detection of voice activity. |
8385523 | 11232483 | 1 | 1. A method for voice message retrieval comprising: a voice mailbox provider receiving a selection of a caller that is to be associated with a first voice message and also receiving caller information for the caller that is to be associated with the first voice message mailbox, the voice mailbox provider comprising at least two voice message mailboxes comprising the first voice message mailbox and a second voice message mailbox; the voice mailbox provider assigning the caller to the first voice message mailbox and associating the caller information with the caller so that a voice message left by the caller is retrievable from the first voice message mailbox by the voice mailbox provider based on the caller information; receiving an incoming communication from the caller associated with the first voice message mailbox; the voice mailbox provider detecting the caller information associated with the caller to identify the caller; the voice mailbox provider receiving a voice message from the caller associated with the first voice message mailbox; the voice mailbox provider associating the voice message with the caller information assigned to the first voice message mailbox and storing the voice message in the first voice message mailbox; a communication device presenting an indicator to a user of the communication device, the indicator displaying information indicating that the voice message from the caller associated with the first voice message mailbox was received by the voice mailbox provider; the communication device transmitting a request to access the voice message of the caller to the voice mailbox provider, the request comprising the caller information associated with the caller; the voice mailbox provider retrieving the voice message from the first voice message mailbox based on the caller information provided in the request received from the communication device; and the voice mailbox provider transmitting the voice message to the communication device for output at the communication device. |
4172668 | 05749369 | 1 | 1. An apparatus for mixing fluids which comprises: a housing affording a substantially annular mixing chamber; an annular rotor mounted for rotation in the mixing chamber, the rotor having one or more inlet orifices disposed at or adjacent its axis of rotation; an inlet chamber communicating with the inlet orifices in the rotor and disposed at or adjacent the axis of rotation of the rotor, the inlet chamber being provided with inlet means for the fluids to be mixed; outwardly extending enclosed radial passages being located in the rotor, leading from said inlet orifices to the periphery of the rotor and emerging therethrough and comprising a V-shaped convergent inlet end for impeding flow of the fluids, a V-shaped divergent outlet end for causing a pressure drop in the fluids, and a parallel side throat portion interconnecting said V-shaped inlet end and said V-shaped outlet end of said enclosed radial passages; a circular outer wall extending around a major portion of the circumference of the mixing chamber with a small clearance between said circular outer wall and the periphery of the rotor, the circular outer wall extending outwardly into a spiral shape so as to define a generally crescent shaped outlet region between the spiral shaped wall of the mixing chamber and the periphery of the rotor; an outlet passage communicating with the outlet region; and, drive means connected to the rotor so as to rotate the rotor in the mixing chamber. |
9213687 | 13167640 | 1 | 1. A computer implemented search engine which automatically generates a plurality of search results from an input text and a query text, the search engine comprising a processor, a search index for retrieving matches in meaning between an input text and query text, and a user interface for receiving the query text and displaying search engine results, the method comprising: receiving the input text and query text; performing, via the processor, and a token-by-token analysis of the query text and input text, a computation of a map of sentiment valences to successive areas of text therein; computing, via the processor, a summation of a negative area from negative valences and a summation of a resolution area from positive valences, based on the map of sentiment valences; computing an unacceptable area, via the processor, based on a difference between the negative area and the positive area; computing an unacceptable running area, via the processor, based on an excess of the unacceptable area which is beyond a max acceptable running imbalance area; computing, via the processor, an acceptable negative area for 60% resolution, based on the positive area, wherein the acceptable negative area is within 60% of the positive area; computing a total compassion, via the processor, based on the acceptable negative area, the negative area, the positive area, wherein total compassion is the subtraction of the unacceptable running area from the sum of the positive area and the acceptable negative area; computing, an array of paragraph clusters ordered by compassion in similar paragraphs, via the processor, based on the total compassion in token-by-token analysis of the query text and input text; computing a query cluster compassion trigrams index based on the array of paragraph clusters ordered by compassion in similar paragraphs; retrieving a set of segment results sorted by relevance based on search engine intersection of input text and query text within the query cluster compassion trigrams index; outputting, via the user interface, the set of segment results sorted by relevance; wherein the computer analysis includes a classification, a categorization, or sorting of the segment results ordered by compassion. |
20140233713 | 14184024 | 0 | 1. A method, comprising: after receiving an amount of data for a voicemail message that is above a first threshold amount, transcribing, by a computing device, a first segment of the data for the voicemail message to first text and transmitting an e-mail that comprises the first text to an intended recipient of the voicemail message; and after receiving an amount of the data for the voicemail message that is above a second threshold amount, transcribing a second segment of the data for the voicemail message to second text and transmitting a message that comprises the second text to the intended recipient of the voicemail message. |
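The record above transcribes a voicemail incrementally: once the received data passes a first threshold, the first segment is transcribed and sent; once it passes a second threshold, the second segment follows. A small sketch of that threshold-triggered segmentation, with illustrative byte thresholds (the claim does not specify the amounts):

```python
def segment_points(total_bytes, thresholds):
    """Return which transcription segments have been triggered after
    receiving `total_bytes` of voicemail data, given ascending byte
    thresholds. Threshold values here are illustrative, not from the claim."""
    return [i + 1 for i, t in enumerate(thresholds) if total_bytes > t]

# e.g. with thresholds at 10 KB and 20 KB of received audio data:
print(segment_points(15_000, [10_000, 20_000]))  # first segment triggered
print(segment_points(25_000, [10_000, 20_000]))  # both segments triggered
```

Each triggered segment would then be transcribed and delivered (e.g. by e-mail) without waiting for the full message to arrive.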
9740678 | 14750185 | 1 | 1. A computer-implemented method of automatic speech recognition, comprising: obtaining, via at least one acoustic signal receiving unit, audio data including human speech; generating, via a decoder, a static vocabulary weighted finite state transducer (WFST) having nodes connected by arcs to propagate at least one token through the static vocabulary WFST and at least one dynamic vocabulary trigger marker at at least one of the arcs; propagating, via the decoder, a token through at least one dynamic vocabulary WFST upon the at least one token reaching the trigger marker; propagating, via the decoder, a token through at least one grammar WFST having at least one dynamic vocabulary class marker that indicates a type of dynamic vocabulary and is associated with the dynamic vocabulary of at least one of the dynamic vocabulary WFSTs with a propagating token; providing, via the decoder, a hypothetical word or phrase based at least in part on the obtained human speech and depending, at least in part, on the WFSTs and comprising terms in the static vocabulary, dynamic vocabulary, or both vocabularies; determining, via an interpretation engine, user intent based at least in part on output from the decoder based at least in part on the hypothetical word or phrase; and initiating, via the interpretation engine, a response or action based at least in part on the determined user intent, the initiated response or action being implemented via speech output from a speaker component, via visual output from display component, and/or via other action from one or more end devices. |
8983383 | 13626624 | 1 | 1. An apparatus comprising: a microphone; a speaker; one or more processors; and a hands-free module executable by the one or more processors to: establish a first hands-free service connection to a first wireless device and a second hands-free service connection to a second wireless device, wherein the hands-free module provides hands-free functions, including at least initiating phone calls and answering phone calls via the first hands-free service connection and the second hands-free service connection; utilize the first hands-free service connection to establish an active audio communication channel with the first wireless device; disconnect the second hands-free service connection at least partly in response to establishment of the active audio communication channel with the first wireless device, the hands-free module refraining from re-establishing the second hands-free service connection with the second wireless device during a time that the active audio communication channel with the first wireless device is active; transmit, to the first wireless device and via the active audio communication channel, audio data generated based, at least in part, on input provided by the microphone; and cause the speaker to produce sound based, at least in part, on audio data received via the active audio communication channel to the first wireless device; and a hands-free routing module executable by the one or more processors to route the active audio communication channel to another apparatus having a second microphone and a second speaker at least partly in response to receiving an indication from the other apparatus that a first user associated with the first wireless device is closer to the other apparatus than to the apparatus. |
9406310 | 13345531 | 1 | 1. A vehicle voice interface system calibration method comprising: electronically convolving voice command data with voice impulse response data representing a voice acoustic signal path between an artificial mouth simulator present in a passenger compartment of a vehicle and a first microphone present in the passenger compartment, to simulate a voice acoustic transfer function pertaining to the passenger compartment of the vehicle; electronically convolving audio system output data with feedback impulse response data representing a feedback acoustic signal path between a vehicle audio system output and a second microphone present in the passenger compartment, to simulate a feedback acoustic transfer function pertaining to the passenger compartment of the vehicle; combining a voice electrical signal representing the simulated voice acoustic transfer function and a feedback electrical signal representing the simulated feedback acoustic transfer function into a combined electrical signal; providing the combined electrical signal to a microphone electrical input of the vehicle voice interface system; and calibrating the vehicle voice interface system to recognize voice commands represented by the voice command data based on the combined electrical input signal. |
8615397 | 12098016 | 1 | 1. A method for identifying audio content, comprising: receiving a data stream from an electronic device via a communication network, wherein the data stream includes training data and audio content that is to be identified, and wherein the training data and the audio content in the data stream are distorted by dynamic characteristics of the electronic device and the communication network; determining, from the received distorted training data in the data stream, dynamic characteristics of the electronic device and the communication network; dynamically distorting, using a computer, a set of target patterns for identifying the distorted audio content based on the determined dynamic characteristics of the electronic device and the communication network, wherein dynamically distorting the set of target patterns comprises using an encoding technique to perform the distortion; and identifying the distorted audio content in the data stream based on the set of distorted target patterns, wherein the distorted audio content spectrum matches with the set of distorted target patterns. |
3975598 | 05361107 | 1 | 1. An apparatus for random access by an electron beam to analog data storage tracks, said apparatus comprising: target means having a recording of a plurality of spaced-apart storage tracks defining essentially permanent recordings of analog data; means developing an electron beam to scan along a storage track defined by said target means for producing an output electrical signal corresponding to the recordings of analog data; first control means to direct said electron beam onto only a selected one of said storage tracks defined by the target means; second control means to direct said electron beam along only the selected one of said storage tracks defined by the target means; means for oscillating said electron beam at a dither frequency having an amplitude corresponding essentially to the width of the selected one of said analog data storage tracks; and a digitally addressible control means coupled to said means for oscillating and said first and second control means to select and then scan along a storage track for producing an electrical signal corresponding to the recording of analog data, said digitally addressible control means enabling said first control means to direct said electron beam from track-to-track onto the selected one of said storage tracks, said digitally addressible control means further enabling said means for oscillating and said second control means to direct said electron beam along the storage track while the beam oscillates at the dither frequency and thereby scan the selected one of said storage tracks. |
20170133010 | 15269924 | 0 | 1. A computer-implemented method for recognizing and understanding spoken commands that include one or more proper name entities, comprising: receiving an utterance from a user; performing primary automatic speech recognition (ASR) processing upon said utterance with a primary automatic speech recognizer to output a dataset comprising at least a sequence of nominal transcribed words and putative start and end times for each nominal transcribed word within said utterance; performing understanding processing upon said dataset with a natural language understanding (NLU) processor to generate and augment the dataset with a nominal meaning for the utterance and to determine putative presence and type of one or more spoken proper name entities within said utterance, wherein a contiguous section of audio within said utterance corresponding to each putative proper name entity, as determined from said start and end times of the words of the putative proper name entity as transcribed by the primary automatic speech recognizer, comprises an acoustic span; performing secondary automatic speech recognition (ASR) processing upon each said acoustic span with a secondary automatic speech recognizer, in each instance said secondary automatic speech recognizer specialized to process a given putative type of acoustic span to generate a nominal correct transcription and associated meaning for each said acoustic span; substituting the nominal correct transcription and associated meaning obtained from each secondary recognition as appropriate within the dataset to revise the results of the primary automatic speech recognizer and natural language understanding processor and to create a plurality of complete transcriptions and associated meanings; preparing a complete hypothesis ranking grammar comprised of said plurality of complete transcriptions and decoding the utterance against said complete hypothesis ranking grammar to determine an acoustic confidence score for each complete transcription; determining, for each acoustic span of each complete transcription, an NLU confidence score for each transcription of each acoustic span; normalizing said NLU confidence scores across the plurality of complete transcriptions to determine a normalized NLU confidence score of each complete transcription; combining said acoustic confidence score and NLU confidence score of each complete transcription to generate a final confidence score that each complete transcription and associated meaning is correct, which is used to rank the plurality of aforesaid complete transcriptions and associated meanings; and outputting a ranked list of complete transcriptions and associated meanings for the entire utterance. |
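The proper-name record above ranks complete transcriptions by combining an acoustic confidence score with a normalized NLU confidence score. A minimal sketch of that final ranking step; the product combination rule is an assumption, since the claim only says the two scores are combined:

```python
def rank_hypotheses(acoustic, nlu):
    """Normalize NLU confidence scores across the hypotheses, combine each
    normalized score with its acoustic confidence (a simple product is
    assumed here), and return the hypotheses ranked best-first."""
    total = sum(nlu.values())
    combined = {h: acoustic[h] * (nlu[h] / total) for h in acoustic}
    return sorted(combined, key=combined.get, reverse=True)

# Two hypothetical complete transcriptions of one utterance:
ranked = rank_hypotheses(
    acoustic={"call John Smith": 0.8, "call Jon Smyth": 0.6},
    nlu={"call John Smith": 0.5, "call Jon Smyth": 0.9},
)
print(ranked)
```

Note how a weaker acoustic hypothesis can still win the ranking when its NLU confidence is much higher, which is the point of combining the two scores.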
8554556 | 13001334 | 1 | 1. A method of performing voice activity detection, comprising: receiving a first signal from a first microphone, the first signal including a first target component and a first disturbance component; receiving a second signal from a second microphone displaced from the first microphone by a distance, the second signal including a second target component and a second disturbance component, wherein the first target component differs from the second target component in accordance with the distance, and wherein the first disturbance component differs from the second disturbance component in accordance with the distance; estimating a first signal level based on the first signal; estimating a second signal level based on the second signal; estimating a first noise level based on the first signal; estimating a second noise level based on the second signal; calculating a first ratio based on the first signal level and the first noise level; calculating a second ratio based on the second signal level and the second noise level; calculating a current voice activity decision, wherein the current voice activity decision signifies that no voice activity is detected if a difference between the first ratio and the second ratio is smaller than a pre-selected threshold, wherein the threshold is (1−p) ξmin, wherein p is a propagation decay factor and wherein ξmin is a pre-selected minimum SNR threshold for voice presence at the microphone closer to the target sound, and wherein the current voice activity decision signifies that voice activity is detected if the difference is larger than or equal to the pre-selected threshold; and selectively transmitting the first signal according to the current voice activity decision. |
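The two-microphone voice-activity record above states its decision rule explicitly: voice is detected when the difference of the per-channel SNR ratios is at least (1 − p)·ξmin, where p is the propagation decay factor and ξmin the minimum SNR for voice presence at the nearer microphone. A direct sketch of that decision, with illustrative SNR values:

```python
def voice_active(snr_near, snr_far, p, xi_min):
    """Two-microphone voice activity decision per the claimed rule:
    voice is detected when the difference between the near-microphone
    SNR ratio and the far-microphone SNR ratio reaches the threshold
    (1 - p) * xi_min; otherwise no voice activity is signified."""
    threshold = (1.0 - p) * xi_min
    return (snr_near - snr_far) >= threshold

# Near-mic SNR well above far-mic SNR suggests speech from the near source.
print(voice_active(snr_near=12.0, snr_far=4.0, p=0.6, xi_min=10.0))
```

When the decision is negative, the claimed system withholds transmission of the first signal, so this predicate gates the transmit path.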
8160876 | 10953712 | 1 | 1. A method for updating a first speech model in a speech recognition system, comprising: identifying that a user device of a user is in communication with the speech recognition system via a network connection; receiving from the user device, the user device comprising a personal speech model trained for the user through previous speech recognition operations, one or more personal speech model components of the personal speech model trained for the user through previous speech recognition operations, the one or more personal speech model components describing personal speech characteristics of the user, wherein the one or more personal speech model components are received from the user device over the network connection in response to identifying that the user device is in communication with the speech recognition system; updating the first speech model using at least some of the one or more personal speech model components, by modifying at least one speech model component of the first speech model and/or adding at least one speech model component to the first speech model; and performing speech recognition on user speech using the first speech model updated with the at least some of the one or more personal speech model components. |
4439161 | 06301090 | 1 | 1. A learning aid comprising: control means having defined therein a response data set and a set of messages, said control means having means for selecting a first response data from said set of response data in response to an operator generated problem, and for selecting a second response data from said set of response data in response to an operator generated evaluation input, memory means for storage of said response data set; said means for selecting a first response data including means for randomly choosing an incorrect response data from said response data set as said first response; said means for selecting a second particular response data including means for evaluating if said first response data is appropriate with said operator generated problem; and operator interface means communicating with said control means, said operator interface means having means for receiving said operator generated problem, for receiving said operator generated evaluation data, and for communicating to the operator said first response data and said second response data. |
9454525 | 13897780 | 1 | 1. A method of extracting information from a text input received by a natural language understanding system, comprising: parsing the text input to extract a plurality of features from the text input; processing each of the plurality of features through a plurality of statistical models to obtain at least one value; searching the text input for at least one named entity; determining a value for a feature based upon the at least one named entity located within the text input; combining, via a processor, one value for each of the plurality of features to create a complex information target; and outputting the complex information target, wherein the complex information target indicates a meaning for the text input. |
20020120598 | 09791579 | 0 | 1. A method for encoding semi-structured data, comprising: a) providing a semi-structured data input; b) obtaining an encoded semi-structured data by selectively encoding at least part of said semi-structured data into strings of arbitrary length in a way that (i) maintains non-structural and structural information associated with the semi-structured data, and (ii) the so encoded semi-structured data can be indexed for efficient access. |
9710555 | 13035286 | 1 | 1. A method for determining that multiple social media profiles correspond to a common entity, the method comprising: identifying, by a social media analysis tool executed by a processor, a first profile for a first social media platform, a second profile for a second social media platform, and a third profile for a third social media platform, wherein the first and the third profile have a first attribute, a second attribute, and a third attribute, wherein the second profile has the first attribute and lacks the second and third attributes, wherein the second attribute describes an entity name and the third attribute describes a geographical location; determining, by the social media analysis tool based on the first profile and the second profile having different values for the first attribute, that a probability of both the first profile and the second profile corresponding to the common entity is less than a threshold; determining, by the social media analysis tool, a uniqueness score for a combination of values of the second attribute and the third attribute in the first profile, wherein the uniqueness score indicates a likelihood of the second and third attributes having the combination of values, wherein determining the uniqueness score comprises determining a uniqueness of a name identified by the entity name from the second attribute being associated with the geographical location from the third attribute; selecting, by the social media analysis tool based on the determined uniqueness score, the second and third attributes as a basis of comparison between the first profile and the third profile; matching, by the social media analysis tool, the first profile to the third profile based on comparing the values in the first profile for the selected second and third attributes with values in the third profile for the selected second and third attributes; matching, by the social media analysis tool, the second profile and the third profile based at least in part on the second and third profiles having corresponding values for the first attribute; and updating, by the social media analysis tool, the probability of both the first profile and the second profile corresponding to the common entity so that the probability exceeds the threshold, wherein the probability is updated based on matching the first profile to the third profile using the second and third attributes and matching the second profile to the third profile using the first attribute. |
8694560 | 13191391 | 1 | 1. A meter management system, comprising: a storage device configured to store a first table definition language configured to define attributes of an electrical utility meter and to store a table definition language fragment, comprising attributes, functionalities, or both to be modified in the first table definition language, to create a second table definition language; and a processor configured to create the second table definition language based at least upon applying attributes, functionalities, or both of the first table definition language that are not found in the table definition language fragment to the attributes, functionalities, or both from the table definition language fragment without altering the first table definition language; wherein the second table definition language is created at a time before a desired modification is to be applied from the second table definition language to an electrical utility meter. |
8713037 | 13173582 | 1 | 1. A translation method comprising: receiving an input query in a source language; and outputting a target query, the target query being identified from a set of candidate target queries, each target query being based on a translation of the input query into a target language, different from the source language, with a machine translation system which includes a reranking model for ranking the candidate target queries, which has been trained by a method which includes: for each of a plurality of training queries in the source language, translating the training query in the source language into the target language to generate translated queries which are each a translation of the respective training query; for each of the translated queries: computing a feature representation of the translated query; retrieving a set of annotated documents from a document collection in response to the translated query, the documents in the retrieved set of annotated documents including annotations that are based on responsiveness of each of the documents to each of the training queries, and computing a precision score for the translated query based on relevance scores of the retrieved documents in the set of annotated documents, each of the relevance scores being based on the annotations of the documents in the retrieved set of annotated documents; and learning feature weights for the reranking model based on the precision scores and feature representations of the translated queries. |
20070100837 | 11262866 | 0 | 1. A secured enterprise printing system that utilizes the Java Bean application programming interface and operable over a data network, containing: a Web interface enabling the creation, access to, and manipulation of enterprise print beans (EPBs) over a data network; a J2EE container storing and securing EPBs; a licensed controller authorizing access to EPBs; an information database tracking EPB creation, access and manipulation activity; and an input output terminal for printing a print job associated with an EPB. |
9870770 | 14931459 | 1 | 1. A voice recognition system in a vehicle, comprising: a first microphone mounted in the vehicle that collects voice data of an occupant of the vehicle; a second microphone provided in a mobile device of the occupant that collects voice data of the occupant; and a voice recognition device connected to the mobile device through local wireless communication including a noise elimination portion eliminating noise in the voice data collected by the first microphone or the second microphone and a voice recognition portion performing voice recognition using the voice data from which noise is eliminated by the noise elimination portion, wherein the voice recognition portion further includes: a feature extraction portion extracting a reference parameter of the voice data from which noise is eliminated by the noise elimination portion; a parameter setting portion adjusting the reference parameter based on a recognition rate of the reference parameter; a storage portion storing voice recognition data; and a meaning extraction portion extracting meaning of the voice data from which noise is eliminated by the noise elimination portion by comparing the reference parameter to the voice recognition data, wherein the parameter setting portion calculates the recognition rate when a summation of a number of recognition and a number of misrecognition is greater than a predetermined value, wherein the parameter setting portion increases the reference parameter until the reference parameter reaches a predetermined maximum value when the recognition rate is less than a first threshold value, and then the meaning extraction portion performs voice recognition when the reference parameter reaches the predetermined maximum value, and wherein the parameter setting portion decreases the reference parameter until the reference parameter reaches a predetermined minimum value when the recognition rate is greater than a second threshold value, and then the meaning extraction portion performs voice recognition when the reference parameter reaches the predetermined minimum value. |
20110077939 | 12626548 | 0 | 1. A model-based distortion compensating noise reduction apparatus for speech recognition, the apparatus comprising: a speech absence probability calculator for calculating the probability distribution for absence and existence of a speech by using the sound absence and existence information for frames; a noise estimation updater for estimating a more accurate noise component by updating the variance of the clean speech and noise for each frame; a speech absence probability-based noise filter for outputting a first clean speech through the speech absence probability transmitted from the speech absence probability calculator and a first noise filter; a post probability calculator for calculating post probabilities for mixtures using a Gaussian mixture model (GMM) containing a clean speech in the first clean speech; and a final filter designer for forming a second noise filter and outputting an improved final clean speech signal using the second noise filter. |
20150170643 | 14109669 | 0 | 1. An apparatus comprising: a processor; a memory that stores code executable by the processor, the code comprising: a phoneme module that selects recognition phonemes from a phoneme input stream; a user recognition module that selects a user profile for a user recognized based on the recognition phonemes; and a command module that processes a command concurrently identified from the phoneme input stream based on the user profile. |
9870519 | 14794487 | 1 | 1. A method for hierarchical sparse dictionary learning (“HiSDL”) to construct a learned dictionary regularized by an a priori over-complete dictionary, comprising: providing at least one a priori over-complete dictionary for regularization; performing sparse coding of the at least one a priori over-complete dictionary to provide a sparse coded dictionary; using a processor, updating the sparse coded dictionary with regularization using auxiliary variables to provide a learned dictionary; determining whether the learned dictionary converges to an input data set; and outputting the learned dictionary regularized by the at least one a priori over-complete dictionary when the learned dictionary converges to the input data set. |
20120280974 | 13099387 | 0 | 1. A computer-implemented method for generating photo-realistic facial animation with speech, comprising: generating in a computer storage medium a statistical model of audiovisual data over time, based on acoustic feature vectors and visual feature vectors from audiovisual data of facial features during a set of utterances; generating using a computer processor a visual feature vector sequence using the statistical model corresponding to an input set of acoustic feature vectors for speech with which the facial animation is to be synchronized; creating using a computer processor a photorealistic image sample sequence from an image library using the generated visual feature vector sequence; and applying the photorealistic image sample sequence to a three dimensional model of a head to provide the photo-realistic facial animation synchronized with the speech. |
20160078868 | 14953377 | 0 | 1. A processing system comprising: at least one element including at least one of (a) one or more processors or (b) hardware logic/electrical circuitry; activation logic, implemented using the at least one element, configured to determine whether natural language functionality of the processing system is activated, the natural language functionality for enabling the processing system to interpret natural language requests; suggestion logic, implemented using the at least one element, configured to generate one or more intent frames in response to a determination that the natural language functionality of the processing system is activated, each of the one or more intent frames including at least one carrier phrase and at least one slot; an interface configured to provide the one or more intent frames for perception by a user; assignment logic, implemented using the at least one element, configured to assign a plurality of probabilities to a plurality of respective possible intent frames, each probability indicating a likelihood that the user is to select the corresponding possible intent frame if the corresponding possible intent frame is suggested to the user; and identification logic, implemented using the at least one element, configured to identify a high-probability intent frame from the plurality of possible intent frames, the high-probability intent frame being assigned a probability that is not less than a probability that is assigned to each other possible intent frame in the plurality of possible intent frames, the suggestion logic configured to include the high-probability intent frame in the one or more intent frames based on the high-probability intent frame being assigned a probability that is not less than a probability that is assigned to each other possible intent frame in the plurality of possible intent frames. |
7552046 | 10988721 | 1 | 1. A computer-implemented method for determining whether to apply a given paraphrase alternation pattern to an input string, the method comprising: generating a context model based on a data set from which the given paraphrase alternation pattern was derived, the data set being database of news articles; utilizing a computer processor that is a functional component of the computer to apply the context model to determine whether the given paraphrase alternation pattern can be applied to the input string so as to preserve meaning; if it is determined that the given paraphrase alternation pattern can be applied so as to preserve meaning, then applying the given paraphrase alternation pattern to the input string, wherein the given paraphrase alternation pattern indicates a pattern of transformation from a first set of words to a second set of words, and wherein applying the given paraphrase alternation pattern comprises transitioning the input string from the first set of words to the second set of words, and wherein applying the given paraphrase alternation pattern further comprises applying the given paraphrase alternation pattern to the input string in a plurality of different ways to produce a plurality of different textual variations, and then applying the language model to the plurality of different textual variations to determine a probable sequence of words. |
20140023271 | 13797433 | 0 | 1. A method to identify regions, the method comprising: receiving an image of a scene of real world; creating a plurality of sets of positions automatically, by at least performing comparisons using multiple pluralities of pixels hereinafter compared pixels that are located in the image at corresponding positions comprised in the plurality of sets of positions; wherein a first set in the plurality of sets of positions is created without using in any comparison, a plurality of pixels hereinafter skipped pixels that are located in the image at additional positions comprised in the first set; wherein a first region identified by the first set is contiguous in the image, the first region comprising the compared pixels and the skipped pixels identified respectively by the corresponding positions and the additional positions; wherein a second region is contiguous in the image, the second region being identified by positions in a second set, in the plurality of sets of positions created by the creating; checking automatically, whether a test is satisfied by a first attribute of the first region and a second attribute of the second region; preparing automatically, a merged set comprising the positions in the first set and the positions in the second set, based on at least an outcome of said test; and storing automatically, in one or more memories, the merged set; wherein the receiving, the creating, the checking, the preparing and the storing are performed by one or more processors coupled to the one or more memories. |
7765201 | 11373991 | 1 | 1. An apparatus for making a search for a document in accordance with a query of a natural language, comprising: a first interface unit configured to input a user specified first question sentence represented in a natural language, the first question sentence including a term representing a query; a morphological analysis unit configured to execute morphological analysis for the first question sentence input by the first interface unit, thereby to divide the question sentence on a word by word basis; a question analysis unit configured to analyze the first question sentence, thereby to generate a plurality of second question sentences from the first question sentence, wherein the question analysis unit includes: a first module configured to specify, based on a morphological analysis result by the morphological analysis unit, a first noun that corresponds to a subjective case of the first question sentence and is included in the first question sentence; a second module configured to extract one second noun or a plurality of second nouns, which are included in the first question sentence, other than the first noun from the first question sentence based on the morphological analysis result; and a third module configured to connect the first noun, each of the plurality of second nouns, and the term representing the query included in the first question sentence for said each of the plurality of second nouns when the plurality of second nouns are extracted, thereby to generate the plurality of second question sentences, each including the first noun, at least one of the plurality of second nouns, and the term representing the query, for individually querying each of a plurality of matters, the first noun also corresponding to the subjective case of each of the plurality of second question sentences; a search engine configured to make searches for documents which match respective matters queried by the plurality of second question sentences from a morphological index database by index searches according to the plurality of second question sentences generated by the question analysis unit, the morphological index database storing morphological analysis results for a plurality of documents as indexes; a storage unit which stores tuning information, the tuning information including a concatenation condition for specifying which of documents acquired by the document searches by the search engine and abstracts of the acquired documents are used as search results for the plurality of second question sentences; a concatenation unit configured to select the documents acquired by the document searches by the search engine or the abstracts of the acquired documents as the search results for the plurality of second question sentences to be concatenated with each other in accordance with the tuning information stored in the storage unit, thereby to generate a search result document which represents a search result for the first question sentence by concatenating search results for the plurality of second question sentences by the search engine; and a second interface unit configured to provide a user with the search result document generated by the concatenation unit as a search result for the first question sentence. |
9437213 | 13589954 | 1 | 1. A method of discriminating relative to a voice signal, the method comprising: receiving, via one or more audible sensors, an audible signal including a target voice signal; converting the audible signal into a corresponding plurality of wideband time-frequency units, wherein the time dimension of each time-frequency unit includes at least one of a plurality of sequential intervals, and wherein the frequency dimension of each time-frequency unit includes at least one of a plurality of wide sub-bands; calculating one or more characterizing metrics from the plurality of wideband time-frequency units; calculating a gain function from one or more characterizing metrics calculated from the plurality of wideband time-frequency units; converting the audible signal into a corresponding plurality of narrowband time-frequency units; applying the gain function, calculated from the plurality of wideband time-frequency units, to the plurality of narrowband time-frequency units to produce a corresponding plurality of narrowband gain-corrected time-frequency units; converting the plurality of narrowband gain-corrected time-frequency units into a corrected audible signal, wherein the corrected audible signal includes an improved target voice signal relative to the received audible signal; and outputting the corrected audible signal through an output device. |
20030117365 | 10017067 | 0 | 1. An electronic device with a UI, wherein: the UI provides first user-selectable options, and second user selectable options available upon selection of a specific one of the first options; an information resolution of the first options when rendered differs from the information resolution of the second options when rendered; and a first modality of user interaction with the UI for selecting from the first options differs from a second modality of user interaction with the UI for selecting from the second options. |
20150037775 | 14446980 | 0 | 1. A surgical training simulator system, comprising: a housing structure; an optical module affixed to the housing structure and including an optical camera positioned to capture images in light received from a workspace that includes a field-of-view (FOV) of the optical camera; a lighting system structured to illuminate said workspace; and a projector configured to form an image, of a display of the projector, in said workspace in light received from said display of the projector; and a tangible, non-transitory computer-readable storage medium having computer-readable program code thereon, the computer-readable program code including program code for generating, with electronic circuitry of the surgical training simulator system and for each motion from a set of motions that have been tabulated for a surgical procedure performed with an instrument within the workspace, an event output representing an occurrence of re-alignment of the instrument when data, acquired with the electronic circuitry, indicate that a change in operational status of the instrument has crossed a predetermined operational threshold; and program code for creating a multi-level hierarchy of descriptors representing changes in the operational status of the instrument by determining identifiable portions of the motion based on combination of multiple event outputs. |
20150279352 | 14433263 | 0 | 1. A mobile device adapted for automatic speech recognition (ASR) comprising: a speech input for receiving an unknown speech input signal from a user; a local controller for: a. determining if a remote ASR processing condition is met, b. transforming the speech input signal into a selected one of a plurality of different speech representation types, and c. sending the transformed speech input signal to a remote server for remote ASR processing; a local ASR arrangement for performing local ASR processing of the speech input including processing any speech recognition results received from the remote server. |
9984377 | 11624631 | 1 | 1. A method of converting visual content information in an image into audio content and providing the audio content for presentation, the method comprising: transmitting first content by a server system via a network to a first end-user device to facilitate a user interface presented by an application of the first end-user device to prompt input comprising an image; processing a first transmission received from the first end-user device via a network by the server system, the first transmission (a) comprising an image, the image comprising visual content information and provided as input via the user interface, and (b) corresponding to a request from an advertiser; responsive to the request from the advertiser, generating, by the server system, audio content based at least in part on the image included in the visual content information at least in part by: analyzing the visual content information to extract meaningful words from the visual content information; identifying one or more limits associated with the audio content; generating a summarized text for the audio content (a) based on the meaningful words extracted from the visual content information, (b) using natural language processing to link the meaningful words based on a heuristic model and a database of words commonly used in similar content, and (c) based at least in part on the one or more limits associated with the audio content, wherein the linking of the meaningful words based on the heuristic model and the database of words commonly used in similar content generates grammatically correct text; and converting the summarized text into speech to generate the audio content and storing the audio content in an audio file; transmitting by the server system via the network the audio file storing the audio content for presentation via a second end-user device; processing by the server system a second transmission received consequent to a selection of the audio content, via the second end user device, that initiates a real-time communication connection for a call from the second end user device to the advertiser upon selection of the audio content; determining by the server system whether the call from the second end user device has been connected to the advertiser via the audio content; and responsive to determining that the call from the second end user device has been connected to the advertiser via the audio content, generating a record entry in a connection record data structure indicating that the call from the second end user device was connected to the advertiser via the audio content. |
9368118 | 13777432 | 1 | 1. A voice analyzer comprising: a voice information acquiring unit that acquires information about a voice acquired by a first voice acquiring unit and a second voice acquiring unit, the first voice acquiring unit being worn by a first wearer and the second voice acquiring unit being worn by a second wearer, and the first voice acquiring unit having a first microphone and a second microphone, the first microphone being positioned closer to a mouth of the first wearer than the second microphone; and a distance calculation unit that calculates a distance between the first wearer and the second wearer based on: (a) speaker identification information, which is information for determining whether the voice is spoken by the first wearer, the second wearer or another person, wherein when a sound pressure ratio of the first microphone to the second microphone is greater than a threshold value, the voice is determined to be a voice of the first wearer, and when the sound pressure ratio is less than the threshold value, the voice is determined to be a voice of the second wearer or another person, and (b) a phase difference between sound waves with a plurality of frequencies included in the voice acquired by the first voice acquiring unit and the second voice acquiring unit, the sound waves including a first sound wave included in the voice of the first wearer and a second sound wave included in the voice of the second wearer, the distance calculation unit calculating the distance when a voice of the second wearer is substantially synchronized with an on and off timing of a voice of the first wearer. |
7769143 | 11929458 | 1 | 1. A method for digital signal manipulation, comprising: receiving an acoustic analog signal at a user system; converting the analog signal to a digital signal; canceling noise from the digital signal to form a processed digital signal; detecting user speech in the processed digital signal by evaluating change in amplitude sign of the processed digital signal; detecting vehicle information associated with the user speech; and if user speech is detected in the processed digital signal, packaging the user speech into speech packets to form a packaged voice signal; selecting a transmission format compatible with the packaged voice signal; and transmitting the packaged voice signal and vehicle information to a server. |
9326088 | 13628875 | 1 | 1. A mobile voice platform for providing a user speech interface to computer-based services using a mobile device, wherein the mobile device includes a processor, communication circuitry that provides access to the computer-based services, an operating system, and one or more applications that are run using the operating system and that utilize one or more of the computer-based services via the communication circuitry, the mobile voice platform comprising: at least one non-transient digital storage medium storing a program module having computer instructions that, upon execution by the processor, receives speech recognition results representing user speech that has been processed using automated speech recognition, determines a desired computer-based service based on the speech recognition results, converts vocabulary within the speech recognition results into a smaller vocabulary that is supported by two or more different service interfaces, accesses a service interface associated with the desired service, initiates the desired service using the service interface, receives a service result from the desired service, and provides a text-based service response for conversion to a speech response to be provided to the user. |
20030158736 | 10229266 | 0 | 1. A voice-enabled user interface comprising: user interface elements; and a speech recognition engine that receives voice input identifying a target user interface element, wherein the voice-enabled user interface resolves ambiguities in associating the received voice input with the target user interface element using representational enumerated labels. |
8060229 | 12635968 | 1 | 1. A graphical user interface in a portable electronic device for configuring the portable electronic device for a workout undertaken by a user through a display, said graphical user interface comprising: a workout type interface configured to receive an input of the user through the display to select at least a workout type for the workout; a workout characteristics interface configured to receive an input of the user through the display to select at least one workout characteristic for the workout of the selected workout type; an accessory sensor status interface configured to receive an input of the user through the display to select an accessory sensor with which to pair the portable electronic device and from which to collect data during the workout and communicate the collected data to the portable electronic device to monitor the workout of the user; and a calibration interface configured to receive an input of the user through the display to fine tune the paired accessory sensor to the workout of the selected workout type with the user of the portable electronic device, the fine tuning providing calibration data to the portable electronic device to determine progress of the user with respect to the workout in combination with the data collected from the accessory sensor during the workout. |
20020165715 | 10020895 | 0 | 1. A speech recognition system, comprising: means for determining the length of a speech portion to be recognised; means for defining a subset of speech portions from a set of stored speech portions in dependence on the determined length; and recognition means for recognising the speech portion from the subset of speech portions. |
20120219126 | 13460701 | 0 | 1. A system for processing call records, comprising: a call processor to process a call between a caller and a live agent, comprising: a speech recognition engine to receive a stream of verbal speech utterances from the caller and to convert the stream of verbal speech utterances into text; and a text-to-speech engine to receive text messages from the agent in response to the stream of verbal speech and to convert the text messages into synthesized speech utterances; a telephony interface to provide the synthesized speech utterances to the caller; a call record processor to process a record of the call comprising the verbal speech utterances and the transcribed speech utterances; a display to present the processed record to a further live agent for manipulation; and a database to store the manipulated record. |
20030037301 | 10197355 | 0 | 1. A computerized method comprising: identifying one or more valid content descriptions that can be used as a source of a new description; applying a set of formal logic rules to the one or more valid content descriptions; and constructing the new content description conforming to the set of formal logic rules from the one or more valid content descriptions. |
20140232656 | 13771187 | 0 | 1. A method for operating an electronic device having a display and a capacitive physical keyboard, comprising: controlling operation of the device in a first context in which a first input operation of the capacitive physical keyboard reflects selection of keys on the capacitive physical keyboard; responsive to receipt of information reflecting a potential context change, enabling control of the device to switch to operation in a second context that is different from the first context; responsive to an input, controlling operation in the second context in which a second input operation of the capacitive physical keyboard reflects selection of keys on the capacitive physical keyboard, wherein the second input operation is different from the first input operation; and returning control to operation in the first context. |
6038527 | 08809080 | 1 | 1. A method of generating descriptors for natural language texts, using a plurality of training texts having a plurality of words, comprising the steps of: extracting words from a text during a training phase on the basis of the training texts; predetermining a minimum structure of said descriptors; breaking down words in the text into shorter word segments, wherein each shorter word segment within a longer word segment must meet said minimum structure for said breaking down to be permitted; and matching said word segments that remain in the text against each other to generate a list of descriptors. |
20080086681 | 11534597 | 0 | 1. A method of providing a relayed language interpretation service to permit a caller to communicate with a third party, comprising: receiving a language interpretation call from a caller at a language interpretation provider; determining a caller language corresponding to a language spoken of the caller; engaging a first interpreter that speaks the caller language and a base language; receiving an indication from the caller that the caller needs interpretation between the caller language and a third-party language; permitting the first interpreter to engage a second interpreter that speaks the base language and the third-party language; engaging the second interpreter to the language interpretation call; engaging the third-party to the language interpretation call; and permitting over-the-phone interpretation of the caller language and the third-party language. |
20040085368 | 10697845 | 0 | 1. A computer implemented method of providing visual feedback to a computer user during manipulation of selected text on a display device of a computer system, the computer system including a cursor control device for interactively positioning a cursor and an insertion caret on the display device, the computer also having a signal generation device for signaling an active state and an inactive state, the method comprising the computer implemented steps of: a) in response to an active state of the signal generation device while the cursor is over the selected text at a source location on said display device: 1) creating and displaying a text object of the selected text; 2) de-emphasizing the selected text at the source location; b) in a finite series of steps moving the text object on the display device along a line between the source location and the cursor until the text object reaches the cursor; c) displaying the insertion caret near the cursor; d) in response to an inactive state of the signal generation device while the cursor is over a destination location: 1) on the display device zooming from a first bounding rectangle for the selected text at the source location to a second bounding rectangle for the selected block of text at the destination location; and 2) displaying on screen the selected text at the destination location. |
20070211168 | 11412189 | 0 | 1. A method of setting languages in a television receiver, the method comprising: setting a language in a first menu according to a user selection, the first menu corresponding to one function of a plurality of language-specific functions; and automatically setting a language of at least one other function of the plurality of language-specific functions to the language set according to the user selection. |
9123347 | 13598112 | 1 | 1. A noise eliminating apparatus comprising: a speech section detecting unit configured to detect a speech section from a noise speech signal including a noise signal; a speech section separating unit configured to separate the speech section into a consonant section and a vowel section on the basis of a Vowel Onset Point (VOP) in the speech section; a filter transfer function calculating unit configured to calculate a transfer function of a filter for eliminating the noise signal in order to allow the degree of noise elimination in the consonant section and the vowel section to be different, wherein the filter transfer function calculating unit comprises an initial transfer function calculating unit and a final transfer function calculating unit, wherein the initial transfer function calculating unit is configured to calculate an initial transfer function by estimating the priori SNR at a current signal frame when calculating the initial transfer function by using the current signal frame extracted from a noise speech signal, and wherein the final transfer function calculating unit is configured to calculate a final transfer function as a transfer function of the filter by updating a previously-calculated transfer function in consideration of a critical value according to whether a corresponding signal frame corresponds to which one of the consonant section, the vowel section, and a non-speech section, when calculating the final transfer function by using at least one signal frame after the current signal frame; and a noise eliminating unit configured to eliminate the noise signal from the noise speech signal on the basis of the transfer function. |
4525076 | 06190356 | 1 | 1. A vocal announcing device for an electronic timepiece wherein background sounds are overlappingly performed with vocal announcement during vocal announcement in a timepiece with vocal announcing device which announces indicated time or performs alarming action by voice signals, said device comprising: a voice memory section which memorizes voices as digital signals; a voice signal generating circuit which at least includes a first address counter that reads out in a fixed order a content memorized in said voice memory section and said circuit further includes a converting section which converts digital signals supplied by said voice memory section into analog signals; a background sound memory section which memorizes background sounds as digital signals; a background sound signal generating circuit including second address counter which reads out in a fixed order a content memorized in said background sound memory section and said circuit further includes a converting section which converts digital signals supplied by said background sound memory section into analog signal; and action controlling section which makes said first and second address counters operate overlappingly at an optional interval or at an indicated time. |
8070601 | 12540227 | 1 | 1. A system for providing simultaneous context based audio interaction among a plurality of participants in a network based gaming environment, the system comprising: a single centralized game server separate from and in communication with a plurality of audio communication devices associated with a plurality of game participants, the game server configured to host the network based gaming environment, to generate game state profiles for each game participant and to maintain the game state profile for each game participant comprising game specific context for that game participant; an audio conference server separate from and in communication with the game server, the audio conference server configured to host voice over internet protocol based audio conferences between two or more game participants; a plurality of geographically distributed audio mixers, each audio mixer in communication with the audio conference server and one of the plurality of audio communication devices and separate from the audio communication devices, audio conference server and game server; and a plurality of simultaneous and independent voice over internet protocol based audio conferences within the network based game environment, all of the audio conferences contained with a single instance of a dynamic network based game in the network based game environment and each audio conference comprising the audio communication devices associated with a distinct group of participants, each group of participants comprising a plurality of participants having a shared game context within the game state profiles comprising parameters or attributes that permit audio communication among the game participants and each audio conference comprising audio paths between the plurality of geographically distributed audio mixers and the audio communication devices associated with each game participant in that group; wherein the single centralized game server is configured to initiate and to control the plurality of simultaneous and independent voice over internet protocol based audio conferences within the single instance of the network based game based on the generated and maintained game state profiles and to switch the audio communication devices among the plurality of audio conferences seamlessly and dynamically during the single network based game instance and non-disruptively to the single network based game instance and any of the audio conferences and wherein the audio conference server is configured to establish each audio conference solely in response to instructions from the game server. |
20060156247 | 11026349 | 0 | 1. A device that is capable of creating a user interface (UI) having multiple UI elements; the device adapted to establish respective hot zones for respective ones of the multiple UI elements; wherein the device is further adapted to present at least one floating action button when a focus targets a hot zone. |
9384736 | 13590699 | 1 | 1. A computer-implemented method for managing speech recognition response, the computer-implemented method comprising: receiving a spoken utterance at a client electronic device, the client electronic device having a local automated speech recognizer; analyzing the spoken utterance using the local automated speech recognizer; transmitting at least a portion of the spoken utterance over a communication network to a remote automated speech recognizer that analyzes spoken utterances and returns remote speech recognition results; prior to receiving a remote speech recognition result from the remote automated speech recognizer, initiating a response via a user interface of the client electronic device, the response corresponding to the spoken utterance, wherein at least an initial portion of the response is based on a local speech recognition result from the local automated speech recognizer, wherein the local speech recognition result is classified into one of multiple reliability classes based on a confidence value representing accuracy of the local speech recognition result, and wherein the initiated response is selected based on the reliability class assigned to the local speech recognition result; and modifying the response after the response has been initiated and prior to completing delivery of the response via the user interface such that modifications to the response are delivered via the user interface as a portion of the response, the modifications being based on the remote speech recognition result. |
8781829 | 13527347 | 1 | 1. A computer-implemented method performed by at least one computer processor, the method comprising: (A) applying automatic speech recognition to an audio signal to produce a structured document representing contents of the audio signal; (B) determining whether the structured document includes an indication of compliance for each of a plurality of best practices to produce a conclusion; (C) inserting content into the structured document, based on the conclusion, to produce a modified structured document; (D) generating a first indication that a user should provide additional input of a first type to conform the structured document to a first best practice in the plurality of best practices; and (E) generating a second indication that the user should provide additional input of a type to conform the structured document to a second best practice in the plurality of best practices. |
20070162924 | 11326818 | 0 | 1. A method for classifying a video, comprising the steps of: defining a set of classes for classifying an audio signal of a video; combining selected classes of the set as a subset of important classes, the subset of important classes being important for a specific highlighting task; combining the remaining classes of the set as a subset of other classes; training jointly the subset of important classes and the subset of other classes with training audio data to form a task specific classifier; and classifying the audio signal using the task specific classifier as either important or other to identify highlights in the video corresponding to the specific highlighting task. |
6081775 | 09334391 | 1 | 1. A method in a computer system for, in a representation of one or more dictionaries comprising a plurality of text segments, characterizing the sense of an occurrence of a polysemous word, the method comprising the steps of: selecting a plurality of dictionary text segments each containing a first word; identifying among the selected dictionary text segments a first occurrence of a second word, the first occurrence of the second word having no word sense characterization; identifying among the selected dictionary text segments a second occurrence of the second word, the second occurrence of the second word having a word sense characterization; and attributing to the first occurrence of the second word the word sense characterization of the second occurrence of the second word. |
20030165800 | 10319254 | 0 | 1. A method of improving a student's score on a language-based test, e.g. a math test or the like, testing the student's proficiency in a subject such as math using a language based test directed to the subject, teaching said subject by having the student play a spatial-temporal software game configured to teach the subject using animated characters and obtaining a score that assesses the student's proficiency in the concepts embodied in the spatial-temporal game, testing the student's proficiency in said subject using a second language-based test directed to the subject taught by the spatial-temporal software, comparing the game scores with the student's score from the second language-based test to determine if the test score reflects a proficiency lower than what is reflected by the student's game scores, using the comparison of game scores to language-based test scores to provide language and vocabulary instruction to said student concerning the language and vocabulary terms used in the language-based test, said instruction including using flashcards and one or more stories in which the language and vocabulary terms of said language-based test are associated with one or more of the animated characters used in the spatial-temporal software game so that the student will relate vocabulary terms with the concepts embodied in the spatial-temporal software game, and re-testing the student using a language-based test to evaluate the student's performance. |
20100055654 | 12550188 | 0 | 1. A learning apparatus comprising: feature extracting means for extracting a feature at a feature point in a plurality of training images including a training image that contains a target object to be recognized and a training image that does not contain the target object; tentative learner generating means for generating a tentative learner for detecting the target object in an image, the tentative learner being formed from a plurality of weak learners through statistical learning using the training images and the feature obtained from the training images; and learner generating means for generating a final learner that is formed from a plurality of weak learners and that detects the target object in an image by substituting the feature into a feature function formed from at least one of the weak learners that form the tentative learner so as to obtain a new feature and performing statistical learning using the new feature and the training images. |
20100086108 | 12246056 | 0 | 1. A method implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions operable to: receive an audio stream of a communication between a plurality of participants; filter the audio stream of the communication into separate audio streams, one for each of the plurality of participants, wherein each of the separate audio streams contains portions of the communication attributable to a respective participant of the plurality of participants; and output the separate audio streams to a storage system. |
8543913 | 12252418 | 1 | 1. A method for accessing textual widgets, comprising: invoking a spell-checker to check a spelling of a string expression, wherein the string expression includes one of a predefined prefix, a predefined suffix or a predefined formula; marking the string expression as misspelled based upon the predefined prefix, the predefined suffix or the predefined formula; executing one of a plurality of textual widgets for performing a non-spellchecking function, wherein the executed textual widget is associated with the string expression via the predefined prefix, the predefined suffix or the predefined formula, and wherein each different textual widget performs a different function/operation upon the marked string expression; returning at least one result of the non-spellchecking function; and displaying the at least one result. |
20160004707 | 14733188 | 0 | 1. A method for providing natural language query translation, the method comprising: training a statistical model according to a plurality of query click log data; receiving a natural language query; translating the natural language query into a search query according to the statistical model; performing the search query; and providing at least one result associated with performing the search query. |
20150120288 | 14066079 | 0 | 1. A method comprising: receiving speech from a user at a device communicating with a remote speech recognition system, wherein the device comprises an embedded speech recognition system that accesses private user data; and recognizing a part of the speech with the embedded speech recognition system by accessing the private user data, wherein the private user data is not available to the remote speech recognition system. |
8111174 | 12245575 | 1 | 1. An apparatus for identifying running vehicles using acoustic signatures, comprising: an input sensor configured to capture an acoustic waveform produced by a vehicle source in an area to be monitored and convert the waveform into a digitized electrical signal; and a processing system configured to divide the digitized electrical signal into a plurality of frames; compute at least one spectral feature vector for each frame; integrate said spectral feature vectors over the plurality of frames to produce a spectro-temporal representation of said acoustic waveform, and apply values obtained from said spectro-temporal representation as inputs to a learning function to determine an acoustic signature of the vehicle source. |
4581757 | 06295198 | 1 | 1. A speech synthesizer comprising: means for receiving an input from an external control device; first memory means for permanently storing a first plurality of coded speech data; second memory means coupled to said receiving means for temporarily storing a second plurality of coded speech data, said second plurality of coded speech data being provided by said external control device; speech synthesizer processor means for converting coded speech data into digital speech signals representative of human speech; selector means for selectively activating one of said first and second memory means to apply selected coded speech data to said speech synthesizer processor means from either one of said first and second pluralities of coded speech data in response to a control signal provided by said external control device designating which of said first and second memory means is active; and digital-to-analog converter means operably associated with said speech synthesizer processor means for converting said digital speech signals into analog signals representative of human speech. |
9734138 | 15257084 | 1 | 1. A computer implemented method of tagging utterances with Named Entity Recognition (“NER”) labels using an unmanaged crowd, the method being implemented in an end user device having one or more physical processors programmed with computer program instructions that, when executed by the one or more physical processors, cause the end user device to perform the method, the method comprising: obtaining, by the computer system, a plurality of utterances relating to a domain, the domain being associated with a plurality of entities, each entity relating to a category of information in the domain; generating, by the computer system, a first annotation job configured to request that at least a first portion of the utterance be assigned to one of a first set of entities, from among the plurality of entities, wherein a number of the first set of entities does not exceed a maximum number such that cognitive load imposed on a user to whom the first annotation job is provided is controlled; generating, by the computer system, a second annotation job configured to request that at least a second portion of the utterance be assigned to one of a second set of entities, from among the plurality of entities, wherein: a number of the second set of entities does not exceed the maximum number such that cognitive load imposed on a user to whom the second annotation job is provided is controlled, the first portion and the second portion are the same or different and the first set of entities is different than the second set of entities, and the user to whom the first annotation job is provided is the same or different from the user to whom the second annotation job is provided; causing, by the computer system, the first annotation job and the second annotation job to be deployed to the unmanaged crowd; and receiving, by the computer system, a plurality of annotations provided by the unmanaged crowd, the plurality of annotations comprising a first annotation relating to the first annotation job and a second annotation relating to the second annotation job. |
4592085 | 06469114 | 1 | 1. A method for recognizing particular phonemes in a voice signal having silence-phoneme and phoneme-phoneme transitions, said method comprising the steps of: providing an electrical signal representing said voice signal; producing a first acoustic parameter signal from said electrical signal, said first acoustic parameter signal containing phonemic information of said voice signal; generating a transition signal from the phonemic information in said first acoustic parameter signal indicating the location in said voice signal of a transition; storing said first acoustic parameter signal; and producing a second acoustic parameter signal from said stored first acoustic parameter signal using said transition signal, said second acoustic parameter signal containing phonemic information of said voice signal at said transition, whereby said second acoustic parameter signal can be compared with known phonemic information to recognize the phonemic information in said voice signal. |
20130102295 | 13628875 | 0 | 1. A mobile voice platform for providing a user speech interface to computer-based services using a mobile device having a processor, communication circuitry that provides access to the computer-based services, an operating system, and one or more applications that are run using the operating system and that utilize one or more of the computer-based services via the communication circuitry, the mobile voice platform comprising: at least one non-transient digital storage medium storing a program module having computer instructions that, upon execution by the processor, receives speech recognition results representing user speech that has been processed using automated speech recognition, determines a desired computer-based service based on the speech recognition results, accesses a remotely-stored service interface associated with the desired service, initiates the desired service using the service interface, receives a service result from the desired service, and provides a text-based service response for conversion to a speech response to be provided to the user. |