No. Patent Title Application No. Filing Date Publication No. Publication Date Inventors
121 VOICE RESPONSIVE FLUID DELIVERY, CONTROLLING AND MONITORING SYSTEM AND METHOD PCT/US2013041902 2013-05-20 WO2013177084A2 2013-11-28 COLEMAN DENNIS R
A system (10) and methodology for delivering fluids and monitoring their status which is voice actuated. This system has application where a hands-free environment is preferred. Voice commands are given by the user via a Bluetooth® headset (40) and are typically received by the user's smartphone (22). Voice recognition circuitry is programmed to recognize the simple commands, and through complementing electronics (55, 56, 60, 62) and electro-mechanical (58, 68) and mechanical (70, 72) elements, delivery at corresponding flow rates is accomplished. A further feature allows respective voice commands to initiate a monitoring function (20) whereby the status of any particular characteristic of the fluid can be relayed back to the user via the headset.
122 MEDIA TAGGING PCT/US2012059059 2012-10-05 WO2013052867A3 2013-08-01 ROGERS HENK B
Tagging techniques use user interface features such as face icons that can be manipulated by a variety of cursors to tag media items including but not limited to photos. Tagged items can be presented automatically in response to establishing network communications between two devices.
123 SYSTEM AND METHOD FOR CUSTOMIZED AUDIO PROMPTING PCT/US2008050290 2008-01-04 WO2008086216A3 2008-09-12 ZEINSTRA MARK; CHUTORASH RICHARD J; GOLDEN JEFFREY; SKEKLOFF JON M
A method for providing an audible prompt to a user within a vehicle. The method includes retrieving one or more data files from a memory device. The data files define certain characteristics of an audio prompt. The method also includes creating the audio prompt from the data files and outputting the audio prompt as an audio signal.
124 SHARING VOICE APPLICATION PROCESSING VIA MARKUP PCT/EP2006069664 2006-12-13 WO2007071602A2 2007-06-28 NANAVATI AMIT ANIL; RAJPUT NITENDRA
A system is described for processing voice applications comprising a client device (10) having associated data indicative of its computing capability. The system has access to a plurality of scripts specifying tasks to be performed in a voice-based dialog between a user and the system. The scripts are interpretable at a browser level. A server (20) selects an appropriate script for the client device (10) based on the associated data. An interpreter layer processes the selected script to determine a first set of instructions to be performed on the client device (10) and a second set of instructions to be performed on the server (20) for the dialog. Computation is thus shared between the client device and the server based on the computational capability of the client.
125 SYSTEMS TO ENHANCE DATA ENTRY IN MOBILE AND FIXED ENVIRONMENT PCT/US2005019582 2005-06-03 WO2005122401A3 2006-05-26 GHASSABIAN BENJAMIN FIROOZ
An electronic device (1904) includes a first means (1911) for entering characters, coupled to the device, for generating first character input data, and a second means (1906) for entering characters for generating second character input data, where the second means (1906) includes a system for monitoring a user's voice. A display (1905) displays the characters thereon. A processor (not shown) is coupled to the first (1911) and second (1906) means for entering characters and is configured to receive the first and second character input data such that the characters displayed on the display (1905) correspond to both the first and second character input data.
126 PROCESS FOR IDENTIFYING AUDIO CONTENT PCT/IB0100982 2001-05-15 WO0188900A3 2002-05-23 LAROCHE JEAN
A fingerprint of an audio signal is generated based on the energy content in frequency subbands. Processing techniques assure a robust identification fingerprint that will be useful for signals altered subsequent to the generation of the fingerprint. The fingerprint is compared to a database to identify the audio signal.
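The abstract above describes fingerprinting from subband energy content with robustness to later alterations of the signal. A minimal sketch of that idea follows; the frame size, band count, and the sign-of-adjacent-band-difference bit rule are illustrative assumptions, not details taken from the patent:

```python
import math

def band_energies(frame, num_bands=8):
    """Energy in num_bands equal-width frequency subbands of one frame,
    via a naive DFT (toy scale; a real system would use an FFT)."""
    n = len(frame)
    powers = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        powers.append(re * re + im * im)
    width = (n // 2) // num_bands
    return [sum(powers[b * width:(b + 1) * width]) for b in range(num_bands)]

def fingerprint(signal, frame_len=64, num_bands=8):
    """One bit per adjacent band pair per frame: the sign of the energy
    difference, which survives overall level changes to the signal."""
    bits = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        e = band_energies(signal[start:start + frame_len], num_bands)
        bits.append(tuple(int(e[b] > e[b + 1]) for b in range(num_bands - 1)))
    return bits

def hamming_distance(fp_a, fp_b):
    """Bit differences between two fingerprints; a database lookup would
    return the stored entry with the smallest distance."""
    return sum(a != b for fa, fb in zip(fp_a, fp_b) for a, b in zip(fa, fb))
```

Because each bit encodes only the ordering of two band energies, uniformly amplifying the signal leaves the fingerprint unchanged, which is one way an altered signal can still match its stored fingerprint.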
127 VOICE RECOGNITION METHOD AND DEVICE PCT/DE0001056 2000-04-05 WO0101389A3 2001-03-29 KIPP ANDREAS
A voice recognition method wherein a section of a continuous speech flow consisting of spoken words is detected by means of comparison with stored models. In response to the detection of a first key word, said key word is stored, a first voice recognition system is deactivated and a second voice recognition system is activated. In a second detection step, the speech flow is checked by the second speech recognition system for the appearance of a predetermined, second key word or a second key word sequence.
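The two-stage control flow described above (a first recognizer listens only for a key word, then hands off to a second recognizer) can be sketched over a stream of already-recognized tokens; the wake word "activate" and the command vocabulary are hypothetical examples, not taken from the patent:

```python
def keyword_dialog(token_stream, wake_word="activate", commands=("lights", "radio")):
    """First recognition stage scans only for the wake word; once it is
    found, that stage is deactivated and a second stage scans the rest
    of the stream for a command keyword.
    Returns (wake_word_seen, command_or_None)."""
    stream = iter(token_stream)
    for token in stream:            # first recognition stage
        if token == wake_word:
            break
    else:                           # stream ended without the key word
        return (False, None)
    for token in stream:            # second recognition stage
        if token in commands:
            return (True, token)
    return (True, None)
```

Note that command words occurring before the wake word are ignored, mirroring the patent's point that the second recognizer is only activated after the first detection.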
128 METHOD AND APPARATUS FOR ESTABLISHING CONNECTION BETWEEN ELECTRONIC DEVICES PCT/US2015020710 2015-03-16 WO2015142719A3 2016-01-28 KIM KANG; PARK MIN-KYU; CHO YONGWOO; HWANG KYU WOONG; KIM DUCK-HOON
A method, performed in an electronic device, for connecting to a target device is disclosed. The method includes capturing an image including a face of a target person associated with the target device and recognizing an indication of the target person. The indication of the target person may be a pointing object, a speech command, and/or any suitable input command. The face of the target person in the image is detected based on the indication and at least one facial feature of the face in the image is extracted. Based on the at least one facial feature, the electronic device is connected to the target device.
129 DEVICE FOR EXTRACTING INFORMATION FROM A DIALOG PCT/US2013028831 2013-03-04 WO2013134106A3 2013-11-21 WAIBEL ALEXANDER
Computer-implemented systems and methods for extracting information during a human-to-human mono-lingual or multi-lingual dialog between two speakers are disclosed. Information from either the recognized speech (or the translation thereof) by the second speaker and/or the recognized speech by the first speaker (or the translation thereof) is extracted. The extracted information is then entered into an electronic form stored in a data store.
130 SPEECH RECOGNITION APPARATUS FOR AN ELEVATOR, AND METHOD FOR CONTROLLING SAME PCT/KR2011004709 2011-06-28 WO2012002702A3 2012-04-26 YOO JAE HYEOK
The objective of the present invention to provide a speech recognition apparatus for an elevator, which can be installed regardless of the elevator manufacturer, and which improves use convenience for visually-impaired or hearing-impaired passengers. The present invention also relates to a method for controlling the apparatus. The speech recognition apparatus for an elevator according to the present invention comprises: a speech-recognition unit which outputs a signal for displaying the process of recognizing hall calling speech and floor-selecting speech of a passenger and the result of the speech recognition, and outputs a speech recognition command corresponding to the hall calling and floor selection; a button input unit which detects a hall calling signal and a floor selection signal from the passenger; a control unit that outputs an operation control signal in accordance with the speech recognition command outputted from the speed recognition unit and the hall calling signal and the floor selection signal detected from the button input unit; a protocol-converting unit which converts the elevator operation control signal outputted from the control unit into a predetermined communication protocol for an elevator manufacturer in accordance with the elevator manufacturer; a transceiving unit which transmits the operation control signal converted by the protocol-converting unit to the outside; and a display unit which enables the hall calling signal and the floor selection signal to be recognized by the passenger in accordance with the operation control signal outputted from the control unit. The apparatus and the method of the present invention involve installing a hall calling unit and an operating panel in an existing elevator such that the existing elevator can operate as a speech recognition elevator.
131 SPEECH RECOGNITION WITH PARALLEL RECOGNITION TASKS PCT/US2009049604 2009-07-02 WO2010003109A3 2010-03-18 STROPE BRIAN; BEAUFAYS FRANCOISE; SIOHAN OLIVIER
The subject matter of this specification can be embodied in, among other things, a method that includes receiving an audio signal and initiating speech recognition tasks by a plurality of speech recognition systems (SRS's). Each SRS is configured to generate a recognition result specifying possible speech included in the audio signal and a confidence value indicating a confidence in a correctness of the recognition result. The method also includes completing a portion of the speech recognition tasks including generating one or more recognition results and one or more confidence values for the one or more recognition results, determining whether the one or more confidence values meet a confidence threshold, aborting a remaining portion of the speech recognition tasks for SRS's that have not completed generating a recognition result, and outputting a final recognition result based on at least one of the generated recognition results.
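The race-and-abort pattern in this abstract can be sketched with a thread pool; the engine names, timings, and confidence values below are stand-ins, and the cancellation semantics are those of Python futures (only tasks that have not started can be cancelled), not a claim about the patented system:

```python
import concurrent.futures
import time

def run_parallel_recognition(recognizers, audio, threshold=0.8):
    """Run every recognizer on the same audio in parallel; return the
    first (text, confidence) result that meets the threshold and abort
    the recognition tasks that have not yet started."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(rec, audio) for rec in recognizers]
        best = None
        for done in concurrent.futures.as_completed(futures):
            text, confidence = done.result()
            if best is None or confidence > best[1]:
                best = (text, confidence)
            if confidence >= threshold:
                for fut in futures:
                    fut.cancel()  # only cancels tasks not yet running
                break
        return best

# Hypothetical stand-ins for real recognition engines.
def fast_engine(audio):
    time.sleep(0.01)
    return ("call home", 0.92)

def slow_engine(audio):
    time.sleep(0.3)
    return ("call Rome", 0.55)
```

Here the fast engine's result clears the threshold, so the method returns without waiting on every remaining result.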
132 DATA SIMILARITY AND IMPORTANCE USING LOCAL AND GLOBAL EVIDENCE SCORES PCT/IL2007000980 2007-08-07 WO2008018064A3 2009-05-07 BOIMAN OREN; IRANI MICHAL
A method includes finding regions of a reference signal which provide at least one of local evidence scores and a global evidence score. The local evidence scores indicate local similarity of the regions of the reference signal to regions of a query signal, and the global evidence score defines the extent of a global similarity of the query signal to the reference signal. A media exploring device is also included which includes an importance encoder and a media explorer. The importance encoder generates importance scores of at least portions of digital media as a function of at least one of local evidence scores and global evidence scores. The media explorer enables exploring through the digital media according to at least one of (i) the importance scores and (ii) data associations/links induced by the evidence scores between different portions of the digital media. The device may also include a media player to play the digital media with adaptive speeds as a function of the importance scores. The device may also include a labeling/annotation module which inherits labels/annotations/markings according to the abovementioned data associations.
133 AUDIO DETECTION USING DISTRIBUTED MOBILE COMPUTING PCT/EP2007061990 2007-11-07 WO2008080673A3 2008-09-25 COUPER CHRISTOPHER CLIVE; KATZ NEIL ALAN; MOORE VICTOR
A method of identifying incidents using mobile devices can include receiving a communication from each of a plurality of mobile devices. Each communication can specify information about a detected sound. Spatial and temporal information can be identified from each communication as well as an indication of a sound signature matching the detected sound. The communications can be compared with a policy specifying spatial and temporal requirements relating to the sound signature indicated by the communications. A notification can be selectively sent according to the comparison.
134 SYSTEM AND METHOD FOR SPEECH RECOGNITION-ENABLED AUTOMATED CALL ROUTING PCT/US2005041473 2005-11-14 WO2006062707A3 2006-09-08 BUSHEY ROBERT R; KNOTT BENJAMIN ANTHONY; MARTIN JOHN MILLS; KORTH SARAH
A system and method are disclosed for processing a call by receiving caller input in a speech format and utilizing phonemes to convert the speech input into word strings. The word strings are then converted into at least one object and at least one action. A synonym table is utilized to determine actions and objects. Objects generally represent nouns and adjective-noun combinations while actions generally represent verbs and adverb-verb combinations. The synonym table stores natural language phrases and their relationship with actions and objects. The actions and objects are utilized to determine a routing destination utilizing a routing table. The call is routed based on the routing table.
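The synonym-table and routing-table lookups described above can be sketched as two dictionaries; every phrase, action/object pair, and destination below is a hypothetical example, not content from the patent:

```python
# Synonym table: natural language phrases -> (action, object).
# Actions play the verb role; objects the noun role.
SYNONYM_TABLE = {
    "pay my bill": ("pay", "bill"),
    "talk to someone about my bill": ("inquire", "bill"),
    "cancel my service": ("cancel", "service"),
}

# Routing table: (action, object) -> destination.
ROUTING_TABLE = {
    ("pay", "bill"): "billing-payments",
    ("inquire", "bill"): "billing-agent",
    ("cancel", "service"): "retention-desk",
}

def route_call(word_string):
    """Map a recognized word string to a routing destination via the
    synonym table, falling back to an operator when nothing matches."""
    action_object = SYNONYM_TABLE.get(word_string.lower().strip())
    if action_object is None:
        return "general-operator"
    return ROUTING_TABLE[action_object]
```

The two-table design lets new phrasings be added to the synonym table without touching the routing logic.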
135 SYSTEMS AND METHODS FOR OFF-BOARD VOICE AUTOMATED VEHICLE NAVIGATION PCT/US2005032552 2005-09-09 WO2006031804A2 2006-03-23 SCHALK THOMAS
A method of providing navigational information to a vehicle operator includes processing destination information spoken by a vehicle operator with an on-board processing system on a vehicle. The processed voice information is transmitted via a wireless link to a remote data center and analyzed with a voice recognition system at the remote data center to recognize components of the destination information spoken by the vehicle operator. The accuracy of the recognition of the components of the destination information is confirmed via interactive voice exchanges between the vehicle operator and the remote data center. A destination is determined from confirmed components of the destination information, route information to the destination is generated at the remote data center, and the route information is transmitted to the on-board processing system from the remote data center via the wireless link.
136 APPARATUS AND METHOD FOR PROCESSING SERVICE INTERACTIONS PCT/US2004013946 2004-05-05 WO2004099934A2 2004-11-18 CLORAN MICHAEL ERIC
An interactive voice and data response system directs input to a voice-, text-, and web-capable software-based router, which is able to intelligently respond to the input by drawing on a combination of human agents, advanced speech recognition and expert systems, connected to the router via a TCP/IP network. The digitized input is broken down into components so that the customer interaction is managed as a series of small tasks rather than one ongoing conversation. The router manages the interactions and keeps pace with a real-time conversation. The system utilizes both speech recognition and human intelligence for purposes of interpreting customer utterances or customer text. The system may use more than one human agent, or both human agents and speech recognition software, to interpret the same component simultaneously for error-checking and interpretation accuracy.
137 AUDIO DATA RECEIPT/EXPOSURE MEASUREMENT WITH CODE MONITORING AND SIGNATURE EXTRACTION PCT/US0331075 2003-09-26 WO2004030340A2 2004-04-08 NEUHAUSER ALAN R; WHITE THOMAS W
Systems and methods are provided for gathering audience measurement data relating to receipt of and/or exposure to audio data by an audience member. Audio data is monitored to detect a monitoring code. Based on detection of the monitoring code, a signature characterizing the audio data is extracted.
138 METHOD AND APPARATUS FOR CLASSIFYING SOUND SIGNALS PCT/FR0302116 2003-07-08 WO2004006222A3 2004-04-08 HARB HADI; CHEN LIMING
The invention concerns a method for assigning at least one sound class to a sound signal, characterized in that it comprises the following steps: dividing the sound signal into temporal segments having a specific duration; extracting the frequency parameters of the sound signal in each of the temporal segments, by determining a series of values of the frequency spectrum in a frequency range between a minimum frequency and a maximum frequency; assembling the parameters in time windows having a specific duration greater than the duration of the temporal segments; extracting from each time window, characteristic components; and on the basis of the extracted characteristic components and using a classifier, identifying the sound class of the time windows of the sound signal.
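The pipeline in this abstract (segment, extract spectral values, assemble into windows, reduce to characteristic components, classify) can be sketched end to end; the segment length, window size, number of spectral values, mean-based components, and nearest-centroid classifier are all illustrative assumptions standing in for the patented choices:

```python
import math

def spectral_values(segment, n_values=4):
    """Frequency parameters of one temporal segment: magnitudes at
    n_values points of the spectrum (naive DFT, toy scale)."""
    n = len(segment)
    vals = []
    for k in range(1, n_values + 1):
        re = sum(segment[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(segment[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        vals.append(math.hypot(re, im))
    return vals

def window_features(signal, seg_len=32, segs_per_window=4):
    """Assemble per-segment parameters into longer time windows and
    reduce each window to characteristic components (here simply the
    mean of each spectral value across the window's segments)."""
    segs = [spectral_values(signal[i:i + seg_len])
            for i in range(0, len(signal) - seg_len + 1, seg_len)]
    feats = []
    for w in range(0, len(segs) - segs_per_window + 1, segs_per_window):
        window = segs[w:w + segs_per_window]
        feats.append([sum(col) / len(col) for col in zip(*window)])
    return feats

def classify(feats, centroids):
    """Nearest-centroid stand-in for the classifier: assigns each
    window the label of the closest class centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(centroids, key=lambda lbl: dist(f, centroids[lbl]))
            for f in feats]
```

With centroids learned from labeled audio, each time window of an incoming signal receives its own sound class, matching the per-window identification the claim describes.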
139 INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM EP16888071.4 2016-10-24 EP3410433A1 2018-12-05 TAKI, Yuhei; KAWANO, Shinichi; SAWAI, Kunihito; NAKAGAWA, Yusuke; KATO, Ayumi

[Object] It is desirable to provide technology that allows a user who listens to the result of speech recognition processing to learn the accuracy of that processing.

[Solution] Provided is an information processing device including: an information acquisition unit configured to acquire information related to accuracy of speech recognition processing on sound information based on sound collection; and an output control unit configured to control a speech output mode of a result of the speech recognition processing on a basis of the information related to the accuracy of the speech recognition processing.
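A minimal sketch of an output-control rule driven by recognition accuracy follows; the thresholds and mode names are invented for illustration and are not specified by the application:

```python
def select_output_mode(accuracy):
    """Choose how to voice a recognition result from its accuracy,
    so the listener can infer how reliable the result is
    (thresholds and mode names are illustrative assumptions)."""
    if accuracy >= 0.9:
        return "speak_result"
    if accuracy >= 0.6:
        return "speak_result_with_confirmation"
    return "ask_user_to_repeat"
```

Varying the output mode with accuracy is what lets the listener "get to know" the reliability of what is read back.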

140 INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM EP16888059.9 2016-10-14 EP3410432A1 2018-12-05 KAWANO, Shinichi; TOUYAMA, Keisuke; FURUE, Nobuki; SAITO, Keisuke; SATO, Daisuke; MITANI, Ryosuke; ICHIKAWA, Miwa

There is provided an information processing apparatus including a processing unit configured to perform a summarization process that summarizes the content of speech indicated by voice information based on a user's speech, on the basis of acquired information indicating a weight related to the summary.
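One way to picture weight-driven summarization is selecting the highest-weight elements of an utterance while preserving their order; the word list, weights, and selection rule below are purely illustrative, not the application's method:

```python
def summarize(utterance_words, weights, keep=3):
    """Toy weight-driven summary: keep the `keep` highest-weight words
    of the utterance in their original order (weights play the role of
    the acquired 'information indicating a weight related to a summary')."""
    ranked = sorted(range(len(utterance_words)),
                    key=lambda i: weights[i], reverse=True)[:keep]
    return [utterance_words[i] for i in sorted(ranked)]
```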
