No. Patent Title Application No. Filing Date Publication No. Publication Date Inventors
21 Mobile communication terminal and method for playing a musical instrument KR1020060116881 2006-11-24 KR1020080047038A 2008-05-28 박동익; 임한상
A mobile terminal and a method for playing an instrument are provided that allow a user to change instruments, play by adjusting octaves, and store the played content. A microphone array (100) includes a plurality of microphones arranged in a row. When a sound is input via at least one of the microphones of the array, a controller (120) recognizes the musical scale corresponding to the microphone into which the sound was input and outputs a control signal so that the corresponding musical scale is output for each microphone. A melody chip (130) stores sound sources for each instrument and outputs the instrument sound source having the corresponding musical scale according to the control signal. Each microphone of the array receives a signal that allows the sound and its sound pressure to be measured.
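Entry 21 describes a direct mapping from the triggered microphone to a scale degree and then to a stored instrument sample. A minimal Python sketch of that lookup follows; the scale assignment, the sound-source table, and all names are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the microphone-to-scale mapping described above.
# The scale, the sound-source table, and the file names are illustrative.

SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]  # one mic per scale degree

def on_microphone_input(mic_index: int, instrument: dict) -> str:
    """Map the triggered microphone to its scale degree and return the
    matching instrument sound source (the melody-chip lookup)."""
    note = SCALE[mic_index % len(SCALE)]
    return instrument[note]  # e.g. a stored piano sample for that note

piano = {n: f"piano_{n}.pcm" for n in SCALE}
print(on_microphone_input(2, piano))  # -> piano_E4.pcm
```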
22 Method for processing chord changes during automatic accompaniment of an electronic musical instrument KR1019940013027 1994-06-09 KR100121126B1 1997-12-04 김문기
The chord-change processing method for automatic accompaniment of an electronic musical instrument performs a slur process, where possible, when the chord changes due to a new chord input, and otherwise performs a retrigger process or disregards the new chord input, thereby preventing sounds that do not actually exist from being generated. The method comprises the steps of: checking the tones used in the current accompaniment when new chord information is input, and determining whether the slur process is possible; if it is possible, performing the slur process and playing the automatic accompaniment; and if it is not, performing the retrigger process or disregarding the new chord information and playing the automatic accompaniment.
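As a rough illustration of entry 22's decision flow, the sketch below assumes the slur test is "the old and new chords share tones"; the patent checks the tones currently in use, so this is only one plausible reading.

```python
# Illustrative sketch of the chord-change decision described above; the
# slur condition (shared tones between old and new chord) is an assumption.

def on_chord_change(current_tones: set, new_chord: set, allow_retrigger: bool = True):
    if current_tones & new_chord:                    # some tones carry over
        return ("slur", new_chord - current_tones)   # start only the new tones
    if allow_retrigger:
        return ("retrigger", new_chord)              # restart with the new chord
    return ("ignore", current_tones)                 # keep playing the old chord

print(on_chord_change({"C", "E", "G"}, {"C", "E", "A"}))   # ('slur', {'A'})
print(on_chord_change({"C", "E", "G"}, {"D", "F#", "A"}))  # ('retrigger', ...)
```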
23 Musical tone signal generating apparatus KR1019880012477 1988-09-27 KR1019940001090B1 1994-02-12 스즈끼히데오; 우사사또시
No content.
24 Method for composing music on a handheld device KR1020077000675 2005-06-29 KR1020070031384A 2007-03-19 츄,석귀
A note sequence is formed on the keypad of a handheld electronic device. The numeric keys on the keypad of the handheld device are mapped directly to corresponding notes in one octave. By pressing one or more numeric keys on the keypad, a sequence of notes is entered, and a numeric representation of the sequence is displayed on the display screen of the handheld device.
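A small Python sketch of entry 24's key-to-note mapping, assuming keys 1-7 cover one diatonic octave; the exact assignment is not specified in the abstract.

```python
# Minimal sketch of the digit-to-note mapping described above; the key
# assignment (1-7 -> one diatonic octave) is an assumption.

KEY_TO_NOTE = {"1": "C", "2": "D", "3": "E", "4": "F",
               "5": "G", "6": "A", "7": "B"}

def keys_to_sequence(pressed: str) -> list[str]:
    """Turn a run of keypad presses into the displayed note sequence."""
    return [KEY_TO_NOTE[k] for k in pressed if k in KEY_TO_NOTE]

print(keys_to_sequence("1135"))  # ['C', 'C', 'E', 'G']
```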
25 Method and apparatus for improved processing of discontinuities in digital pitch conversion KR1019940031038 1994-11-24 KR100192198B1 1999-06-15 요시다아끼라; 사이또히로유끼
In pitch conversion, the small-discontinuity processing further reduces the degree of discontinuity while simplifying the processing, and the large-discontinuity processing prevents mutual cancellation during overlap, improving sound quality. In the small-discontinuity processing, a first-order FIR filter smooths the currently input data together with the data from one sampling period earlier; within the period in which a small discontinuity occurs, the data is made to approach gradually the data one sampling period earlier (on key-up) or, likewise, later (on key-down), thereby reducing the degree of discontinuity. In the large-discontinuity processing, the weighting coefficients used during cross-fading are varied not linearly but along two straight line segments (a polyline) forming an upward-convex shape, so that the overlapped signals do not cancel each other and the sound quality is improved.
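Entry 25's large-discontinuity fix replaces the linear cross-fade ramp with two line segments forming an upward-convex polyline. A sketch under assumed breakpoint values follows (the first-order FIR smoothing for the small-discontinuity case is included for completeness):

```python
# Sketch of the two-segment ("polyline") cross-fade weighting; the
# breakpoint position and level are assumed values. An upward-convex fade
# keeps the summed weights above 1 mid-fade, so the overlapped signals
# are not mutually cancelled the way a linear (1 - t) fade allows.

def smooth(x_now: float, x_prev: float) -> float:
    """First-order FIR smoothing of the current and previous sample
    (the small-discontinuity case)."""
    return 0.5 * (x_now + x_prev)

def convex_fade_out(t: float, bp: float = 0.5, lvl: float = 0.75) -> float:
    """Fade-out weight at normalised time t in [0, 1]: two line segments
    through (0, 1) -> (bp, lvl) -> (1, 0) instead of the line 1 - t."""
    if t < bp:
        return 1.0 + (lvl - 1.0) * (t / bp)
    return lvl * (1.0 - t) / (1.0 - bp)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    out_w, in_w = convex_fade_out(t), convex_fade_out(1.0 - t)
    print(t, round(out_w, 3), round(in_w, 3), round(out_w + in_w, 3))
```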
26 Electronic acoustic signal generator and method of generating polyphonic tones using the same KR1019910014932 1991-08-28 KR1019950003555B1 1995-04-14 야마다구니오
No content.
27 Pitch conversion apparatus KR1019900004131 1990-03-27 KR1019930011007B1 1993-11-19 오다미끼오
No content.
28 DETERMINING THE CHARACTERISTIC OF A PLAYED CHORD ON A VIRTUAL INSTRUMENT PCT/US2013029466 2013-03-06 WO2013134441A4 2014-03-06 LITTLE ALEXANDER H; MANJARREZ ELI T
A user interface implemented on a touch-sensitive display for a virtual musical instrument comprises a plurality of chord touch regions configured in a predetermined sequence, each chord touch region corresponding to a chord in a musical key and being divided into a plurality of separate touch zones, the plurality of chord touch regions defining a predetermined set of chords. Each of the plurality of separate touch zones in each region is associated with one or more preselected MIDI files stored in a computer-readable medium. Each of the plurality of touch zones is configured to detect one or more of a plurality of touch gesture articulations, including at least one of a legato articulation, a pizzicato articulation, or a staccato articulation. The one or more detected touch gesture articulations determine the preselected MIDI file associated with each of the plurality of separate touch zones.
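Entry 28's zone/articulation lookup is essentially a keyed table. A hypothetical Python sketch follows, with chord names, zone indices, and file names invented purely for illustration.

```python
# Hypothetical layout of the chord-region / touch-zone / articulation
# lookup described above; keys and file paths are illustrative only.

MIDI_FILES = {
    ("Cmaj", 0, "legato"):    "strings_C_zone0_legato.mid",
    ("Cmaj", 0, "pizzicato"): "strings_C_zone0_pizz.mid",
    ("Cmaj", 0, "staccato"):  "strings_C_zone0_stacc.mid",
    # ... one entry per (chord region, touch zone, articulation)
}

def select_midi(chord: str, zone: int, articulation: str) -> str:
    """Resolve a detected touch gesture to its preselected MIDI file."""
    return MIDI_FILES[(chord, zone, articulation)]

print(select_midi("Cmaj", 0, "pizzicato"))
```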
29 DETERMINING THE CHARACTERISTIC OF A PLAYED CHORD ON A VIRTUAL INSTRUMENT PCT/US2013029466 2013-03-06 WO2013134441A3 2014-01-16 LITTLE ALEXANDER H; MANJARREZ ELI T
A user interface implemented on a touch-sensitive display for a virtual musical instrument comprises a plurality of chord touch regions configured in a predetermined sequence, each chord touch region corresponding to a chord in a musical key and being divided into a plurality of separate touch zones, the plurality of chord touch regions defining a predetermined set of chords. Each of the plurality of separate touch zones in each region is associated with one or more preselected MIDI files stored in a computer-readable medium. Each of the plurality of touch zones is configured to detect one or more of a plurality of touch gesture articulations, including at least one of a legato articulation, a pizzicato articulation, or a staccato articulation. The one or more detected touch gesture articulations determine the preselected MIDI file associated with each of the plurality of separate touch zones.
30 SOUND SEQUENCES WITH TRANSITIONS AND PLAYLISTS PCT/US2008001653 2008-02-07 WO2008097625A3 2008-10-30 RECHSTEINER PAUL; EPPERSON IAN; KESTELOOT LAWRENCE; PEARL ELLIOTT; WATSON STEPHEN; YOUNG BRIAN
A home theater system includes construction of, presentation of, and commerce in songs. Presentation includes at least one of: metadata about songs or sounds, a function capable of transitioning from one song to a next song, and user preferences; it can determine in what manner to transition from one song to the next. Construction of songs includes either the factors above or at least one of: a function or a user extension capable of selecting a next song, and an element capable of determining whether the song is perceptually random. A user interface is capable of searching playlists and selecting them for presentation, representing each playlist with a substantially unique pictorial representation, distinguishing in presentation between those playlists licensed to the user and those that are not, and enabling substantially immediate purchase of playlist licenses, either individually or in bulk and either automatically or with minimal intervention.
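One plausible shape of entry 30's transition-selection function, sketched in Python; the metadata fields and the scoring rule are assumptions, not taken from the patent.

```python
# Loose sketch of a metadata-driven transition chooser as described above;
# field names and thresholds are assumptions.

def choose_transition(current: dict, candidate: dict, prefs: dict) -> str:
    """Pick a transition style for moving from one song to the next,
    based on song metadata and user preferences."""
    tempo_gap = abs(current["bpm"] - candidate["bpm"])
    if tempo_gap <= prefs.get("max_crossfade_bpm_gap", 8):
        return "crossfade"
    if current.get("fade_out") and candidate.get("fade_in"):
        return "fade-through-silence"
    return "hard-cut"

print(choose_transition({"bpm": 120}, {"bpm": 124}, {}))  # crossfade
```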
31 RHYTHMIC SYNCHRONIZATION OF CROSS FADING FOR MUSICAL AUDIO SECTION REPLACEMENT FOR MULTIMEDIA PLAYBACK PCT/GB2016051862 2016-06-22 WO2016207625A3 2017-02-02 LYSKE JOSEPH MICHAEL WILLIAM
Music context system, audio track structure, and method of real-time synchronization of musical content. Fig. 2 represents a system (30) that permits identified musical phrases or themes to be synchronized and linked to changing real-world events (12). The achieved synchronization includes a seamless musical transition between potentially disparate pre-identified audio sections (e.g., musical phrases) having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing; the transition is achieved using a timing offset, such as relative advancement of a significant musical "onset", inserted to align with a pre-existing, aligned, identified beat, music signature, or timebase. Cross-fading, cutting, or splicing of audio sections is synchronized with the beat for rhythmic coherence, and the earliest onset is faded first. The system operates to augment the overall sensory experience of a user in the real world by dynamically changing, re-ordering, or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment; e.g., during different phases of a cardio workout in a step class, the music rate and intensity increase during sprint periods and decrease during recovery periods. Based on a pre-inserted key, the accompanying music is re-ordered from an original track and selected automatically, optionally in real time, to accompany detected or identified changing physical events, such as a heart rate sensed and reported during a cardio workout session. The system therefore produces and supplies for use, such as immediate play or broadcast, a composite media file (54) that correlates instantaneous or changing real-world events with customized and user-selectable audio components designed to augment the overall sensory experience.
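A minimal sketch of entry 31's onset-alignment offset: start the incoming section early enough that its first significant onset lands on the next beat of the pre-identified timebase. All names are assumptions.

```python
# Sketch of the onset-alignment idea described above: advance the incoming
# section by a timing offset so its first significant onset lands on the
# pre-identified beat grid. Names and units are assumptions.

def entry_offset(onset_in_section: float, beat_times: list[float],
                 switch_time: float) -> float:
    """Return when (in seconds) to start the incoming section so that its
    onset coincides with the next beat at or after switch_time."""
    next_beat = min(t for t in beat_times if t >= switch_time)
    return next_beat - onset_in_section

beats = [0.0, 0.5, 1.0, 1.5, 2.0]      # 120 BPM timebase
print(entry_offset(0.12, beats, 1.3))  # start at 1.5 - 0.12 = 1.38 s
```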
32 SYSTEMS AND METHODS OF NOTE EVENT ADJUSTMENT PCT/US2013029468 2013-03-06 WO2013134443A1 2013-09-12 LITTLE ALEXANDER H
Some embodiments provide a music editing application that enables a user to compose and edit note characteristics, e.g., via a touch-sensitive display. The graphical user interface (GUI) can display a portion of a music track including note events. In response to receiving a user selection of a note event and a user indication for editing the note event, the GUI can display a menu providing a list of characteristics. The characteristics can include an option for associating one of several virtual instruments or one of several articulations with the note event. Upon receiving a user input indicating a characteristic, the matrix editor can associate the note event with that characteristic. The music editing application can allow the user to edit additional note characteristics (e.g., an instrument, an articulation) because of the extended capacity for data associated with each note event. A GUI for an audio editing application enables a user to easily and conveniently shift the timing and/or pitch of a sequence of note events within a musical piece, e.g., via a touch-sensitive display. The GUI displays a set of note events on a matrix grid and a subset of the note events (e.g., selected by the user) on a note-events grid that overlaps the matrix grid. The note-events grid is movable with respect to the matrix grid, such that the subset of note events is shifted against the remaining note events while the note events within the subset maintain their spatial relationship with respect to each other. Further, the user can shift the note-events grid (and the note events therein) to any location within the matrix grid without unintentionally snapping the note events to the nearest grid location on the matrix grid.
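Entry 32's rigid group shift can be sketched as applying one time/pitch delta to every selected note event, preserving relative timing and pitch; the data layout below is an assumption.

```python
# Minimal model of the movable note-events grid described above: the
# selected subset shifts as a rigid group without snapping. The note
# structure is an assumption.

def shift_selection(notes, selected_ids, d_time: float, d_pitch: int):
    """Offset every selected note event by the same time/pitch delta."""
    return [
        {**n, "start": n["start"] + d_time, "pitch": n["pitch"] + d_pitch}
        if n["id"] in selected_ids else n
        for n in notes
    ]

track = [{"id": 1, "start": 0.0, "pitch": 60},
         {"id": 2, "start": 0.5, "pitch": 64}]
print(shift_selection(track, {1, 2}, 0.25, 2))  # both notes move together
```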
33 Parameter control device CN200420104947.9 2004-10-29 CN2794087Y 2006-07-05 相曾优; 青木孝光
The utility model provides a parameter control device in which nonlinear functions, independent of one another, are prepared for each of a plurality of parameter-setting operator members, such as mixer faders, for automatically changing the current value of the parameter to be set. In response to an automatic setting instruction, such as a scene recall instruction, the current value of the parameter set via each operator member is gradually changed toward a given target value along a characteristic curve based on the corresponding one of the nonlinear functions. For example, each nonlinear function is defined by a start offset, which sets a delay before the change begins, and a decay time, which is the time required to actually change the parameter to the target value after the change begins. Processing of predetermined types of events (e.g., GPI events) can be delayed to account for the time delay that occurs while the automatic setting process for each parameter is executed.
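A sketch of entry 33's start-offset/decay-time behaviour; the linear segment after the offset is an assumption, since the patent allows an arbitrary per-fader nonlinear curve.

```python
# Sketch of the start-offset / decay-time curve described above: the
# parameter holds for `offset` seconds, then moves to the target over
# `decay` seconds. The linear ramp is an assumed curve shape.

def param_value(t: float, start: float, target: float,
                offset: float, decay: float) -> float:
    if t <= offset:
        return start
    if t >= offset + decay:
        return target
    return start + (target - start) * (t - offset) / decay

for t in (0.0, 0.1, 0.3, 0.6):
    print(t, round(param_value(t, 0.0, -10.0, offset=0.1, decay=0.4), 2))
```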
34 SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND COMPUTER PROGRAM US15774062 2016-11-01 US20180357988A1 2018-12-13 HEESOON KIM; MASAHIKO INAMI; KOUTA MINAMIZAWA; YUTA SUGIURA; MIO YAMAMOTO
[Object] To provide a signal processing device capable of aurally exaggerating the movement of an object and presenting that aurally exaggerated movement. [Solution] Provided is a signal processing device including a control unit configured to perform a sound signal process on a waveform of a signal generated on the basis of the movement of an object, and to cause sound corresponding to a signal generated on the basis of the sound signal process to be output within a predetermined period of time. The signal processing device can aurally exaggerate the movement of an object and present that exaggerated movement by performing a sound signal process on a waveform of a signal generated on the basis of the movement of the object.
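A very loose sketch of entry 34's idea, assuming exaggeration amounts to amplifying (and clipping) a motion-derived waveform; the gain law is purely an assumption.

```python
# Very loose sketch of the idea above: exaggerate a motion-derived
# waveform before playback. The gain law and clipping are assumptions.

def exaggerate(samples: list[float], motion_gain: float = 3.0):
    """Amplify (and clip) a waveform generated from object movement so
    the resulting sound over-emphasises the motion."""
    return [max(-1.0, min(1.0, s * motion_gain)) for s in samples]

print(exaggerate([0.05, -0.2, 0.4]))  # [0.15, -0.6, 1.0]
```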
35 METHODS AND APPARATUS TO USE PREDICTED ACTIONS IN VIRTUAL REALITY ENVIRONMENTS US15834540 2017-12-07 US20180108334A1 2018-04-19 Manuel Christian Clement; Stefan Welker
Methods and apparatus to use predicted actions in VR environments are disclosed. An example method includes predicting a predicted time of a predicted virtual contact of a virtual reality controller with a virtual object, determining, based on at least one parameter of the predicted virtual contact, a characteristic of a virtual output the object would make in response to the virtual contact, and initiating production of the virtual output before the predicted time of the virtual contact of the controller with the virtual object.
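Entry 35's look-ahead can be sketched as scheduling output production one processing-latency ahead of the predicted contact time; the names below are hypothetical.

```python
# Sketch of the look-ahead described above: if producing the virtual
# output takes `latency` seconds, start that early relative to the
# predicted contact time. Names and values are assumptions.

def schedule_output(predicted_contact_t: float, latency: float,
                    now: float) -> float:
    """Return when to begin producing the output so it is ready at the
    predicted moment of virtual contact."""
    start_t = predicted_contact_t - latency
    return max(now, start_t)  # never schedule in the past

print(schedule_output(predicted_contact_t=2.50, latency=0.08, now=2.00))  # 2.42
```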
36 SYSTEMS AND METHODS FOR CAPTURING AND INTERPRETING AUDIO US15798829 2017-10-31 US20180068646A1 2018-03-08 Tlacaelel Miguel ESPARZA
A device is provided as part of a system, the device being for capturing vibrations produced by an object such as a musical instrument. Via a fixation element, the device is fixed to a drum. The device has a sensor spaced apart from a surface of the drum, located relative to the drum, and a magnet adjacent the sensor. The fixation element transmits vibrations from its fixation point on the drum to the magnet. Vibrations from the surface of the drum and from the magnet are transmitted to the sensor. A method may further be provided for interpreting an audio input, such as the output of the sensors within the system, the method comprising identifying an audio event or grouping of audio events within audio data, generating a model of the audio event that includes a representation of a timbre characteristic, and comparing that representation to expected representations.
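Entry 36's interpretation step can be sketched as nearest-template matching of a timbre feature vector; the feature choice and distance measure are assumptions.

```python
# Sketch of the interpretation step described above: model an audio
# event's timbre as a feature vector and match it against expected
# representations. Features and distance are assumptions.

def classify_event(features: list[float],
                   expected: dict[str, list[float]]) -> str:
    """Return the expected representation closest to the event model."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(expected, key=lambda name: dist(features, expected[name]))

templates = {"snare-center": [0.9, 0.2], "snare-rim": [0.3, 0.8]}
print(classify_event([0.8, 0.3], templates))  # snare-center
```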
37 Systems and methods for automatic mixing of media US14289438 2014-05-28 US09883284B2 2018-01-30 Par Mikael Bohrarper; Sten Garmark; Niklas Martin Gustavsson; John Fredrik Wilhelm Noren; Gustav Söderström; Babar Zafar
Systems and methods for mixing music are disclosed. Audio mix information is received from a plurality of users. Mix rules are determined from the audio mix information from the plurality of users, wherein the mix rules include a first mix rule associated with a first audio item. The first mix rule relates to an overlap of the first audio item with another audio item. The first mix rule is made available to one or more clients. In some implementations, making the first mix rule available includes transmitting, to a first client, information enabling that client to play back a transition between the first audio item and the other audio item in accordance with the first mix rule.
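One plausible reading of entry 37's crowd-derived overlap rule, sketched in Python; aggregating users' overlaps by median is an assumption.

```python
# Hypothetical shape of a mix rule as described above: an overlap
# (crossfade) between two audio items, derived from many users' mixes.

from statistics import median

def derive_overlap_rule(user_overlaps_ms: list[int]) -> dict:
    """Aggregate the overlaps users chose for a track pair into one rule
    a client can apply at playback time."""
    return {"type": "overlap", "duration_ms": int(median(user_overlaps_ms))}

print(derive_overlap_rule([4000, 5200, 4800]))  # {'type': 'overlap', 'duration_ms': 4800}
```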
38 SOUND CONTROL DEVICE, SOUND CONTROL METHOD, AND SOUND CONTROL PROGRAM US15709974 2017-09-20 US20180018957A1 2018-01-18 Keizo HAMANO; Yoshitomo OTA; Kazuki KASHIWASE
A sound control device includes: a detection unit that detects a first operation on an operator and a second operation on the operator, the second operation being performed after the first operation; and a control unit that causes output of a second sound to be started, in response to the second operation being detected. The control unit causes output of a first sound to be started before causing the output of the second sound to be started, in response to the first operation being detected.
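A toy state sketch of entry 38's two-step control: the first operation starts the first sound, and a later second operation on the same operator starts the second sound. The event names are assumptions.

```python
# State sketch of the two-step control described above; the sound roles
# in the comments are illustrative assumptions.

class SoundControl:
    def __init__(self):
        self.first_started = False

    def on_operation(self) -> str:
        if not self.first_started:
            self.first_started = True
            return "start first sound"   # e.g. an attack portion
        return "start second sound"      # e.g. the sustained tone

ctrl = SoundControl()
print(ctrl.on_operation())  # start first sound
print(ctrl.on_operation())  # start second sound
```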
39 Systems and methods for capturing and interpreting audio US15386840 2016-12-21 US09805703B2 2017-10-31 Tlacaelel Miguel Esparza
A device is provided as part of a system, the device being for capturing vibrations produced by an object such as a musical instrument. Via a fixation element, the device is fixed to a drum. The device has a sensor spaced apart from a surface of the drum, located relative to the drum, and a magnet adjacent the sensor. The fixation element transmits vibrations from its fixation point on the drum to the magnet. Vibrations from the surface of the drum and from the magnet are transmitted to the sensor. A method may further be provided for interpreting an audio input, such as the output of the sensors within the system, the method comprising identifying an audio event or grouping of audio events within audio data, generating a model of the audio event that includes a representation of a timbre characteristic, and comparing that representation to expected representations.
40 Estimation of target character train US14813007 2015-07-29 US09711133B2 2017-07-18 Kazuhiko Yamamoto
A desired character train included in a predefined reference character train, such as lyrics, is set as a target character train, and a user designates a target phoneme train that indirectly represents the target character train by use of a limited plurality of kinds of particular phonemes, such as vowels and particular consonants. A reference phoneme train indirectly representing the reference character train by use of the particular phonemes is prepared in advance. Based on a comparison between the target phoneme train and the reference phoneme train, a sequence of the particular phonemes in the reference phoneme train that matches the target phoneme train is identified, and a character sequence in the reference character train that corresponds to the identified sequence of the particular phonemes is identified. The thus-identified character sequence estimates the target character train.
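Entry 40's matching can be sketched as finding the query's phoneme skeleton inside the lyrics' phoneme skeleton and mapping the hit back to characters; reducing to vowels only is an assumption for illustration.

```python
# Sketch of the matching described above: reduce both the lyrics and the
# query to a "particular phoneme" skeleton (vowels here), find the query
# skeleton inside the reference skeleton, and map the match back to the
# lyric characters. Vowel-only reduction is an assumption.

VOWELS = set("aeiou")

def estimate_target(reference: str, target_phonemes: str) -> str:
    skeleton = [(ch, i) for i, ch in enumerate(reference.lower())
                if ch in VOWELS]
    phones = "".join(ch for ch, _ in skeleton)
    hit = phones.find(target_phonemes)
    if hit < 0:
        return ""
    start = skeleton[hit][1]
    end = skeleton[hit + len(target_phonemes) - 1][1]
    return reference[start:end + 1]

print(estimate_target("twinkle twinkle little star", "ie"))  # 'inkle'
```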