Home / International Patent Classification Library / Physics / Musical instruments; Acoustics / Electrophonic musical instruments / Input/output interfacing specially adapted for electrophonic musical tools or instruments / . User input interfaces for electrophonic musical instruments (graphical user interfaces specially adapted for electrophonic musical instruments: G10H2220/091; input arrangements in general: G06F3/00)
Subcategories:
No. Patent title Application No. Filing date Publication No. Publication date Inventor(s)
1 Tempo detection device CN200710140337.2 2007-08-09 CN101123086B 2011-12-21 澄田錬
The invention provides a tempo detection device and a computer program for tempo detection capable of detecting the average beat interval and the beat positions without error. While the beginning of a beat-detection waveform is played back, the user taps the beat positions using a tapping detection unit (104); when a fluctuation calculation unit (107) determines that the fluctuation of the taps is within a certain range, the beat interval numerically closest to the tapping tempo is selected from the beat-interval candidates detected by a tempo candidate detection unit (102), and the tap position once the tapping has stabilized is taken as the first beat position for beat detection. Beat detection over the whole piece can therefore be performed accurately merely by receiving a few taps from the user.
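As a rough illustration of the selection step described in this abstract (not the patented implementation), the sketch below picks the analysis-derived beat-interval candidate closest to the user's tapping tempo once the tap jitter settles; the function names and the jitter threshold are assumptions made for illustration.

```python
# Illustrative sketch only: choose a beat interval from detected candidates using a few
# user taps, accepting the taps only once their jitter is within a tolerance.

def pick_beat_interval(candidates_sec, tap_times_sec, max_jitter_sec=0.05):
    """Return (beat_interval, first_beat_time) or None if taps are too unstable."""
    if len(tap_times_sec) < 3:
        return None
    intervals = [b - a for a, b in zip(tap_times_sec, tap_times_sec[1:])]
    mean = sum(intervals) / len(intervals)
    jitter = (sum((x - mean) ** 2 for x in intervals) / len(intervals)) ** 0.5
    if jitter > max_jitter_sec:          # taps not yet stable
        return None
    # select the detected candidate numerically closest to the tapped interval
    beat_interval = min(candidates_sec, key=lambda c: abs(c - mean))
    first_beat_time = tap_times_sec[-1]  # the stable tap position anchors the beat grid
    return beat_interval, first_beat_time

# Example: candidates from audio analysis, taps roughly every 0.5 s
print(pick_beat_interval([0.25, 0.5, 0.75], [10.0, 10.49, 11.01, 11.50]))
```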
2 Sound collector, sound signal transmitter and music performance system CN200810001633.9 2008-01-04 CN101221751A 2008-07-16 上原春喜; 又平健次
A music station (1) is connected through a communication network (30) to another music station (2), and music data expressing an exhibition performance on an automatic player piano (11) and voice data expressing the tutor's explanation are transmitted from the music station (1) to the other music station (2) through different communication channels. A close-talking microphone (131) and a bone conduction microphone (132) are incorporated in a sound collector (13a) on the music station (1), and the vibration signal (S3) from the bone conduction microphone (132) is examined to see whether the vocal cords of the tutor (10) are vibrating. When the answer is affirmative, the voice signal (S1) from the close-talking microphone (131) is relayed to a transmitter module (13b), so that the sound collector (13a) does not permit the transmitter module (13b) to transmit a voice signal (S1) expressing noises such as the piano tones; the music performance system thereby prevents the trainee (20) from hearing the tones reproduced from a headphone.
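A minimal sketch of the gating idea in this abstract, assuming a simple per-frame energy test on the bone-conduction signal (the threshold, frame size, and function names are not taken from the patent):

```python
# Illustrative sketch only: relay the close-talking microphone frame to the transmitter
# only while the bone-conduction microphone shows vocal-cord vibration, so piano tones
# picked up while the tutor is silent are not transmitted.

def frame_energy(samples):
    return sum(s * s for s in samples) / max(len(samples), 1)

def relay_voice(close_talk_frame, bone_conduction_frame, threshold=1e-4):
    """Return the voice frame while the tutor speaks, otherwise a silent frame."""
    if frame_energy(bone_conduction_frame) > threshold:
        return close_talk_frame           # vocal cords vibrate: pass S1 on
    return [0.0] * len(close_talk_frame)  # otherwise suppress the frame

silent = [0.0] * 160
speaking = [0.02] * 160
print(relay_voice([0.2] * 160, speaking)[:3])  # relayed
print(relay_voice([0.2] * 160, silent)[:3])    # suppressed
```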
3 Tempo detection device and computer program for tempo detection CN200710140337.2 2007-08-09 CN101123086A 2008-02-13 澄田錬
The invention provides a tempo detection device and a computer program for tempo detection capable of detecting the average beat interval and the beat positions without error. While the beginning of a beat-detection waveform is played back, the user taps the beat positions using a tapping detection unit (104); when a fluctuation calculation unit (107) determines that the fluctuation of the taps is within a certain range, the beat interval numerically closest to the tapping tempo is selected from the beat-interval candidates detected by a tempo candidate detection unit (102), and the tap position once the tapping has stabilized is taken as the first beat position for beat detection. Beat detection over the whole piece can therefore be performed accurately merely by receiving a few taps from the user.
4 Sound collector, sound signal transmitter and music performance system CN200810001633.9 2008-01-04 CN101221751B 2011-05-11 上原春喜; 又平健次
A music station (1) is connected through a communication network (30) to another music station (2), and music data expressing an exhibition performance on an automatic player piano (11) and voice data expressing the tutor's explanation are transmitted from the music station (1) to the other music station (2) through different communication channels. A close-talking microphone (131) and a bone conduction microphone (132) are incorporated in a sound collector (13a) on the music station (1), and the vibration signal (S3) from the bone conduction microphone (132) is examined to see whether the vocal cords of the tutor (10) are vibrating. When the answer is affirmative, the voice signal (S1) from the close-talking microphone (131) is relayed to a transmitter module (13b), so that the sound collector (13a) does not permit the transmitter module (13b) to transmit a voice signal (S1) expressing noises such as the piano tones; the music performance system thereby prevents the trainee (20) from hearing the tones reproduced from a headphone.
5 Input interface for generating control signals by acoustic gestures JP2013552967 2012-02-10 JP5642296B2 2014-12-17 ヤコブ・アベッセル; ザシャ・グロルミシュ
6 Musical tone generating method and apparatus JP2007504633 2006-01-06 JPWO2006090528A1 2008-08-07 中村 俊介; 俊介 中村
Musical tone data can be generated easily so that a performance can be enjoyed. A musical tone generating apparatus 10 includes vibration recognition means 12, a main controller 14, a sound device 16, and a display device 18. The vibration recognition means 12 is a vibration sensor that generates vibration data when a person claps hands or strikes an object. A vibration data processing unit 20 analyzes the waveform of the vibration data and extracts waveform components, a musical tone data generation unit 22 generates musical tone data based on the waveform components, and the sound device 16 produces musical tones from the resulting tone signal.
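A toy-sized sketch of the waveform-to-tone step described above, assuming simple peak picking on the vibration samples and an arbitrary mapping to MIDI-like note numbers (none of these details come from the patent):

```python
# Illustrative sketch only: turn a vibration-sensor waveform into tone events by picking
# local peaks and mapping peak strength to a note number and velocity.

def vibration_to_notes(samples, threshold=0.3, base_note=60):
    """Return (sample_index, note, velocity) tuples for each detected strike."""
    events = []
    for i in range(1, len(samples) - 1):
        x = samples[i]
        if x > threshold and x >= samples[i - 1] and x > samples[i + 1]:
            note = base_note + min(int(x * 12), 24)   # stronger hit -> higher note
            velocity = min(int(x * 127), 127)
            events.append((i, note, velocity))
    return events

clap = [0.0, 0.1, 0.9, 0.4, 0.05, 0.0, 0.6, 0.2, 0.0]
print(vibration_to_notes(clap))
```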
7 Voice analysis method and device, voice synthesis method and device, and medium storing voice analysis program US14455652 2014-08-08 US09355628B2 2016-05-31 Makoto Tachibana
A voice analysis method includes a variable extraction step of generating a time series of a relative pitch. The relative pitch is a difference between a pitch generated from music track data, which continuously fluctuates on a time axis, and a pitch of a reference voice. The music track data designate respective notes of a music track in time series. The reference voice is a voice obtained by singing the music track. The pitch of the reference voice is processed by an interpolation processing for a voiceless section from which no pitch is detected. The voice analysis method also includes a characteristics analysis step of generating singing characteristics data that define a model for expressing the time series of the relative pitch generated in the variable extraction step.
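A minimal sketch of the relative-pitch computation described in this abstract (not the claimed method): the reference pitch is interpolated over voiceless frames, then subtracted from the pitch curve derived from the track data. Frame handling and the use of semitone values are assumptions.

```python
# Illustrative sketch only: relative pitch = track pitch minus reference (sung) pitch,
# with linear interpolation over voiceless frames marked as None.

def interpolate_voiceless(pitch):
    """Fill None gaps by linear interpolation between surrounding voiced frames."""
    out = list(pitch)
    voiced = [i for i, p in enumerate(out) if p is not None]
    for a, b in zip(voiced, voiced[1:]):
        for k in range(a + 1, b):
            t = (k - a) / (b - a)
            out[k] = out[a] + t * (out[b] - out[a])
    return out

def relative_pitch(track_pitch, reference_pitch):
    ref = interpolate_voiceless(reference_pitch)
    return [t - r for t, r in zip(track_pitch, ref)]

track = [60.0, 60.0, 62.0, 62.0, 64.0]   # semitone values from the note data
sung  = [59.8, None, None, 61.5, 63.9]   # None = voiceless frame
print(relative_pitch(track, sung))
```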
8 Systems and methods for musical sonification and visualization of data US14607001 2015-01-27 US09190042B2 2015-11-17 Charles R. Plott; Max J. Hirschhorn; Hsingyang Lee
The current disclosure is directed to systems and methods for automatically converting multi-dimensional, complex data sets to musical symbols while representing the converted data in an animated graph. The system groups the data into a number of subsets of data according to a set of user-configured rules and maps the grouped data points to musical notes according to configurable mapping parameters.
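A small sketch in the spirit of the grouping-and-mapping step described above; the rule callback and the linear value-to-note mapping are assumptions for illustration, not the disclosed mapping parameters.

```python
# Illustrative sketch only: group data points with a user-configured rule and map each
# value linearly onto a configurable range of MIDI note numbers.

def sonify(points, group_rule, low_note=48, high_note=84):
    """Return {group: [note, ...]} with values scaled into the note range."""
    groups = {}
    for p in points:
        groups.setdefault(group_rule(p), []).append(p)
    lo, hi = min(points), max(points)
    span = (hi - lo) or 1.0
    to_note = lambda v: int(low_note + (v - lo) / span * (high_note - low_note))
    return {g: [to_note(v) for v in vals] for g, vals in groups.items()}

prices = [10.0, 12.5, 9.0, 15.0, 14.2]
print(sonify(prices, group_rule=lambda v: "high" if v >= 12.5 else "low"))
```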
9 System and method for using a touchscreen as an interface for music-based gameplay US11745915 2007-05-08 US08961309B2 2015-02-24 Derek T. Dutilly; Jonathan Browne; Nikolaus Paul Ingeneri; Shannon Richard Monroe
An interactive computerized game system including a visual display, one or more user input devices, and a processor executing software that interacts with the display and input device(s) is disclosed. The software displays images of musical targets on the visual display, the musical targets possibly corresponding to a pre-recorded source of music. At least one of the user input devices is a touchscreen arranged to simulate a portion of a musical instrument. During gameplay, the gameplayer must touch the touchscreen at the appropriate time and/or in the appropriate location in response to the displayed images of musical targets. The system provides a positive indication if the player's input matches the attributes of the displayed musical targets.
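As a rough illustration of the matching step described above (touch time and location checked against the displayed musical targets), the sketch below uses an assumed target structure and tolerances rather than the game's actual rules.

```python
# Illustrative sketch only: score a touch against on-screen musical targets by checking
# that it lands inside a target's region within a timing window.

def hit_target(touch_x, touch_y, touch_time, targets, time_window=0.15):
    """Return the first target matched by this touch, or None."""
    for t in targets:
        in_time = abs(touch_time - t["time"]) <= time_window
        in_zone = (t["x0"] <= touch_x <= t["x1"]) and (t["y0"] <= touch_y <= t["y1"])
        if in_time and in_zone:
            return t
    return None

targets = [{"note": "E", "time": 3.20, "x0": 0, "x1": 100, "y0": 400, "y1": 480}]
print(hit_target(40, 450, 3.28, targets))   # within window and zone -> positive indication
print(hit_target(40, 450, 3.60, targets))   # too late -> None
```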
10 Sound collector, sound signal transmitter and music performance system for remote players US11940708 2007-11-15 US08383925B2 2013-02-26 Haruki Uehara; Kenji Matahira
A music station is connected through a communication network to another music station, and pieces of music data expressing an exhibition performance on an automatic player piano and pieces of voice data expressing the tutor's explanation are transmitted from the music station to the other music station through different communication channels; and a close-talking microphone and a bone conduction microphone are incorporated in a sound collector on the music station, and a vibration signal from the bone conduction microphone is examined to see whether or not the vocal cords of the tutor vibrate; when the answer is given affirmative, a voice signal from the close-talking microphone is relayed to a transmitter module so that the sound collector does not permit the transmitter module to transmit the voice signal expressing noises such as the tones; whereby the music performance system prevents the trainee from hearing tones reproduced from a headphone.
11 Fluid user interface such as immersive multimediator or input/output device with one or more spray jets US12469391 2009-05-20 US08294019B2 2012-10-23 W. Stephen G. Mann
A fluid user interface is presented for applications such as immersive multimedia. In one embodiment, one or more sprays or jets create an immersive multimedia environment in which a participant can interact within the immersive multimedia environment by blocking, partially blocking, diverting, or otherwise engaging with a fluid, to create computational input. When the fluid is air, a keyboard can be implemented on cushions of air coming out of various nozzles or jets. When the fluid is water, the invention may be used in environments such as showers, baths, hot tubs, waterplay areas, gardens, and the like to create a fun, playful, or wet user-interface. In some embodiments, the spraying is computationally controlled, so that the spray creates a tactile user-interface for the control of such devices as new musical instruments. These may be installed in public fountains to result in a fluid user interface to music by playing in the fountains. The invention may also be used in a setting like a karaoke bar, in which participants perform music by playing in a fountain while they sing. Small self contained embodiments of the invention may exist as pool toys, bath toys, or decorative fountains that can sit on desk tops, or the like.
12 FLUID USER INTERFACE SUCH AS IMMERSIVE MULTIMEDIATOR OR INPUT/OUTPUT DEVICE WITH ONE OR MORE SPRAY JETS US12469391 2009-05-20 US20090223345A1 2009-09-10 W. Stephen G. Mann
A fluid user interface is presented for applications such as immersive multimedia. In one embodiment, one or more sprays or jets create an immersive multimedia environment in which a participant can interact within the immersive multimedia environment by blocking, partially blocking, diverting, or otherwise engaging with a fluid, to create computational input. When the fluid is air, a keyboard can be implemented on cushions of air coming out of various nozzles or jets. When the fluid is water, the invention may be used in environments such as showers, baths, hot tubs, waterplay areas, gardens, and the like to create a fun, playful, or wet user-interface. In some embodiments, the spraying is computationally controlled, so that the spray creates a tactile user-interface for the control of such devices as new musical instruments. These may be installed in public fountains to result in a fluid user interface to music by playing in the fountains. The invention may also be used in a setting like a karaoke bar, in which participants perform music by playing in a fountain while they sing. Small self contained embodiments of the invention may exist as pool toys, bath toys, or decorative fountains that can sit on desk tops, or the like.
13 Behaviorally based environmental system and method for an interactive playground US791873 1997-01-31 US5990880A 1999-11-23 Bradley L. Huffman; Victor H. Lang
A system and method for controlling an interactive playground in which aspects of the playground are dynamically varied based on input signals from sensors in the playground. The system includes a system supervisor unit that utilizes a rule file, a scene file and a MIDI file in conjunction with a variety of sensor input to create an appropriate system response. Output control signals generated by the system supervisor unit are transmitted to other coupled computers to effectuate audio, visual and other effects in an interactive playground environment. The system supervisor has the desirable ability to load different scene, rule and MIDI files to create different system behavior in response to sensor stimuli, thereby creating a more adaptive behavioral based environment.
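A tiny rule-driven supervisor in the spirit of this abstract, matching sensor readings against rules loaded from a "rule file" and emitting output commands such as audio or MIDI cues; the rule format and command strings are assumptions for illustration.

```python
# Illustrative sketch only: evaluate sensor counts against configurable rules and
# return the output commands that should be triggered.

RULES = [
    {"sensor": "slide_exit", "min_hits": 1, "action": "play_midi:fanfare"},
    {"sensor": "floor_pad",  "min_hits": 3, "action": "lights:rainbow"},
]

def supervise(sensor_counts, rules=RULES):
    """Return the list of output commands triggered by the current sensor counts."""
    return [r["action"] for r in rules
            if sensor_counts.get(r["sensor"], 0) >= r["min_hits"]]

print(supervise({"slide_exit": 1, "floor_pad": 2}))   # ['play_midi:fanfare']
print(supervise({"floor_pad": 5}))                    # ['lights:rainbow']
```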
14 Behavioral based environmental system and method for an interactive playground US348363 1994-11-30 US5740321A 1998-04-14 Bradley L. Huffmann; Victor H. Lang
A behavioral based environment system and method for controlling an interactive playground. The system includes a system supervisor unit that utilizes a rule file, a scene file and a MIDI file in conjunction with a variety of sensor input to create an appropriate system response. Output control signals generated by the system supervisor unit are transmitted to other coupled computers to effectuate audio, visual and other effects in an interactive playground environment. The system supervisor has the desirable ability to load different scene, rule and MIDI files to create different system behavior in response to sensor stimuli, thereby creating a more adaptive behavioral based environment.
15 Voice analysis device and voice analysis method JP2013166311 2013-08-09 JP6171711B2 2017-08-02 橘 誠
16 Input interface for generating a control signal by an acoustic gesture JP2013552967 2012-02-10 JP2014508965A 2014-04-10 ヤコブ・アベッセル; ザシャ・グロルミシュ
A tone input device (100) comprising a tone signal input (110), a tone signal output (34), and a sound classifier (120) connected to the tone signal input (110) for receiving a tone signal arriving at the tone signal input and analyzing it so as to identify, within the tone signal, one or several tone signal passages that match at least one condition. The tone input device further comprises a command signal generator (130), connected to the sound classifier (120), for generating a command signal assigned to the at least one condition, and a command output (140) for outputting the command signal to a command processing unit. The sound classifier (120) is configured to interrupt the output of the tone signal via the tone signal output for the duration of the one or several tone signal passages when the at least one condition is present. A related tone generation device comprises, in particular, a command processing unit (130) for generating a processed tone signal from the incoming tone signal in accordance with the processing rule determined by the command signal, until a cancelling command signal arrives. Corresponding methods and computer programs are also disclosed.
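A per-frame sketch of the behaviour described in this abstract, assuming a stand-in gesture detector and command name (neither is taken from the patent): when the condition is detected, the tone output is muted for that passage and the assigned command is emitted.

```python
# Illustrative sketch only: suppress the tone output during frames that match an
# "acoustic gesture" condition and emit the command assigned to that condition.

def process_stream(frames, condition, command="toggle_effect"):
    """Yield (output_frame, command_or_None) for each incoming frame."""
    for frame in frames:
        if condition(frame):
            # gesture passage: mute the tone output and issue the assigned command
            yield [0.0] * len(frame), command
        else:
            yield frame, None

loud = lambda frame: max(abs(s) for s in frame) > 0.8   # stand-in gesture detector
frames = [[0.1, 0.2], [0.9, 0.95], [0.05, 0.1]]
for out, cmd in process_stream(frames, loud):
    print(out, cmd)
```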
[Selected drawing] Fig. 1
17 Voice transmission system JP2007002361 2007-01-10 JP4940956B2 2012-05-30 春喜 上原; 健次 又平
18 Tempo detector and computer program for tempo detection JP2006216362 2006-08-09 JP2008040284A 2008-02-21 SUMIDA REN
PROBLEM TO BE SOLVED: To provide a tempo detector capable of detecting average intervals and positions of beats without error. SOLUTION: While the beginning of a beat detection waveform is played, a user is made to tap at beat positions by using a tapping detecting unit 104, and when a fluctuation calculating unit 107 decides that fluctuations of the tapping are within a certain range, a beat interval which is numerically close to the tapping tempo is selected out of the candidates for beat intervals detected by a tempo candidate detecting unit 102. Then the tapping position of stable tapping is decided as the starting beat position of the beat detection, so beat detection of the whole music can accurately be performed only by letting the user tap several beats.
19 Voice analysis method and device, and medium storing voice analysis program EP14180151.4 2014-08-07 EP2838082B1 2018-07-25 Tachibana, Makoto
A voice analysis method includes a variable extraction step of generating a time series of a relative pitch. The relative pitch is a difference between a pitch generated from music track data, which continuously fluctuates on a time axis, and a pitch of a reference voice. The music track data designate respective notes of a music track in time series. The reference voice is a voice obtained by singing the music track. The pitch of the reference voice is processed by an interpolation processing for a voiceless section from which no pitch is detected. The voice analysis method also includes a characteristics analysis step of generating singing characteristics data that define a model for expressing the time series of the relative pitch generated in the variable extraction step.
20 METHOD FOR PLAYING VIRTUAL MUSICAL INSTRUMENT AND ELECTRONIC DEVICE FOR SUPPORTING THE SAME EP16835383 2016-08-04 EP3335214A4 2018-06-20 LEE JAE HAK; PARK DOO YONG; LEE YOUNG GYUN; LEE YOUNG DAE; SEO EUN JUNG; HONG DONG GUEN; BANG LAE HYUK; LEE EUN YEUNG; LEE CHEONG JAE
An electronic device is provided. The electronic device includes a touch screen display, at least one of a speaker and a sound interface, a processor configured to electrically connect to the touch screen display, the speaker, and the sound interface, and a memory configured to electrically connect to the processor. The memory stores instructions for, when executed, causing the processor to display at least one item comprising a musical instrument shape on the touch screen display, receive a touch input through the touch screen display, load sound data corresponding to the at least one item based on the touch input, process the sound data based at least in part on information associated with the touch input, and output the processed sound data through the speaker or the sound interface.
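A compact sketch of the touch-to-sound flow described above, with placeholder data standing in for the device's real display and audio interfaces; the item table, touch attributes, and output dictionary are assumptions for illustration.

```python
# Illustrative sketch only: on a touch, load the sound data for the touched item,
# shape it with touch attributes, and return the result for output.

SOUNDS = {"drum": "drum.wav", "piano": "piano.wav"}   # item -> sound data (assumed)

def on_touch(item, touch):
    """Process the item's sound using touch position/pressure and return it for output."""
    sound = SOUNDS[item]                               # load sound data for the item
    volume = min(1.0, touch.get("pressure", 0.5))      # harder touch -> louder
    pan = touch.get("x", 0.5)                          # horizontal position -> pan
    return {"sound": sound, "volume": volume, "pan": pan}

print(on_touch("drum", {"pressure": 0.9, "x": 0.2}))
```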