No. | Patent Title | Application No. | Filing Date | Publication No. | Publication Date | Inventors |
---|---|---|---|---|---|---|
81 | SYSTEMS AND METHODS OF NOTE EVENT ADJUSTMENT | EP13710741.3 | 2013-03-06 | EP2786370B1 | 2017-04-19 | LITTLE, Alexander, H.; HERMANN, Tobias Manuel; HOMBURG, Clemens |
82 | VOICE ANALYSIS METHOD AND DEVICE, VOICE SYNTHESIS METHOD AND DEVICE AND MEDIUM STORING VOICE ANALYSIS PROGRAM | EP15185625.9 | 2014-08-07 | EP2980786B1 | 2017-03-22 | TACHIBANA, Makoto |
83 | METHOD OF MODELING CHARACTERISTICS OF A NON LINEAR SYSTEM. | EP16176571.4 | 2016-06-28 | EP3121608A3 | 2017-03-01 | WANG, Tien-Ming; YEH, Yi-Fan; SIAO, Yi-Song |
A method of modeling (fig. 4, fig. 5) a characteristic of a non-linear system (21) comprises feeding test input signals (Sig6, Sig7) into the non-linear system (21) to obtain test output signals (Resp6, Resp7), wherein the test input signals (Sig6, Sig7) include a first test input signal (Sig6) and the test output signals (Resp6, Resp7) include a first test output signal (Resp6); identifying occurrences at which an output level in at least one specific frequency band of the first test output signal changes significantly under the first test input signal (e.g. identifying the frequencies at which overtones appear), so as to obtain a first profile (Profile B: frequency index versus input level, fig. 7); and modeling the characteristic (e.g. the frequency response of a pre-amplifier stage, fig. 8) based on the first profile (fig. 20).
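The identification step above can be sketched as follows: feed a sine test signal into a hypothetical non-linear stage (a simple hard clipper standing in for the system (21)) and report the frequencies whose output level rises significantly above a threshold away from the fundamental. The clipper, the threshold, and the 100 Hz guard band are illustrative assumptions, not the patent's method.

```python
import numpy as np

def overtone_frequencies(system, freq, fs=48000, n=4096, threshold_db=-30.0):
    """Feed a sine test input into the non-linear system and return the
    frequencies (other than the fundamental) whose output level exceeds
    threshold_db relative to the strongest spectral component."""
    t = np.arange(n) / fs
    test_output = system(np.sin(2 * np.pi * freq * t))
    spectrum = np.abs(np.fft.rfft(test_output * np.hanning(n)))
    level_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # ignore bins within ~100 Hz of the fundamental (window main/side lobes)
    loud = (level_db > threshold_db) & (np.abs(freqs - freq) > 100.0)
    return freqs[loud]

# hypothetical non-linear stage: gain followed by hard clipping
clipper = lambda x: np.clip(1.5 * x, -1.0, 1.0)
```

A clipped 1 kHz sine shows odd overtones near 3 kHz and 5 kHz, while a linear pass-through yields none; repeating this over input levels builds a profile of frequency versus input level in the spirit of "Profile B".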
84 | Note sequence analysis | EP13176155.3 | 2013-07-11 | EP2688063B1 | 2017-02-22 | Sumi, Kouhei |
85 | VOICE ANALYSIS METHOD AND DEVICE, VOICE SYNTHESIS METHOD AND DEVICE AND MEDIUM STORING VOICE ANALYSIS PROGRAM | EP15185624.2 | 2014-08-07 | EP2983168B1 | 2017-02-01 | TACHIBANA, Makoto |
86 | MUSICAL-PERFORMANCE-INFORMATION TRANSMISSION METHOD AND MUSICAL-PERFORMANCE-INFORMATION TRANSMISSION SYSTEM | EP15735419.2 | 2015-01-06 | EP3093840A1 | 2016-11-16 | MATAHIRA, Kenji; UEHARA, Haruki; MAEZAWA, Akira |
A musical-performance-information transmission method uses a first instrument and a second instrument, wherein the first instrument produces sounds in accordance with a user's musical performance and generates musical-performance data in accordance with the produced sounds, and the second instrument produces sounds by receiving the musical-performance data via a communication means. In this method, a mixed-sound signal is generated in accordance with a mixture of the sounds produced by the first instrument and sounds different from them; a reference signal is generated in accordance with the sounds produced by the first instrument; on the basis of the mixed-sound signal and the reference signal, the reference signal is removed from the mixed-sound signal to generate a separated signal; and sound is emitted on the basis of the separated signal.
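A minimal sketch of the separation step described above, assuming the reference reaches the mix with a flat, delay-free response (a real system would use adaptive filtering over the acoustic path):

```python
import numpy as np

def separate(mixed, reference):
    """Remove the reference signal from the mixed-sound signal by
    least-squares fitting a single gain and subtracting."""
    gain = np.dot(mixed, reference) / np.dot(reference, reference)
    return mixed - gain * reference
```

Subtracting the projection onto the reference leaves a separated signal orthogonal to it, so the first instrument's contribution is cancelled while the other sounds pass through.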
87 | Sound source control information generating apparatus, electronic percussion instrument, and sound source control information generating method | EP14182386.4 | 2014-08-27 | EP2846326B1 | 2016-02-17 | Takasaki, Ryo |
88 | VOICE ANALYSIS METHOD AND DEVICE, VOICE SYNTHESIS METHOD AND DEVICE AND MEDIUM STORING VOICE ANALYSIS PROGRAM | EP15185624.2 | 2014-08-07 | EP2983168A1 | 2016-02-10 | TACHIBANA, Makoto |
A voice analysis method comprises generating a time series of a relative pitch (R), which is the difference between a pitch (PB) generated from music track data (XB) designating the respective notes of a music track in time series and a pitch (PA) of a reference voice. The music track is divided into unit sections (UA) of a predetermined duration, and singing characteristics data (Z) are generated, which include, for each of a plurality of statuses (St) of a model (M), classification information for classifying the unit sections (UA) into a plurality of sets and variable information defining a probability distribution of the time series of the relative pitch (R) within each of the classified unit sections (UA). The classification information is generated based on a condition relating to an attribute of the note and on a condition relating to an attribute of each of the unit sections (UA).
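The two basic operations named above, forming the relative-pitch series and dividing it into unit sections, reduce to simple array manipulations; the sketch below assumes pitches are given as MIDI note numbers and the sign convention (PB minus PA) is illustrative:

```python
import numpy as np

def relative_pitch(score_pitch, reference_pitch):
    """Time series of relative pitch R: difference between the pitch PB
    derived from the music track data and the pitch PA of the reference
    voice (both assumed here in semitones / MIDI note numbers)."""
    return np.asarray(score_pitch, float) - np.asarray(reference_pitch, float)

def unit_sections(series, section_len):
    """Divide a time series into unit sections UA of fixed length."""
    return [series[i:i + section_len] for i in range(0, len(series), section_len)]
```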
89 | VOICE ANALYSIS METHOD AND DEVICE, VOICE SYNTHESIS METHOD AND DEVICE AND MEDIUM STORING VOICE ANALYSIS PROGRAM | EP15185625.9 | 2014-08-07 | EP2980786A1 | 2016-02-03 | TACHIBANA, Makoto |
A voice synthesis method comprises generating a relative pitch transition (CR) based on synthesis-purpose music track data (YB) and singing characteristics data (Z). The singing characteristics data (Z) comprise first singing characteristics data (Z1) including a first decision tree (T1[n]) and second singing characteristics data (Z2) including a second decision tree (T2[n]). The first singing characteristics data (Z1) and the second singing characteristics data (Z2) are mixed, and the relative pitch transition (CR) is generated from the synthesis-purpose music track data (YB) and the mixed singing characteristics data based on a model (M). The first decision tree (T1[n]) and the second decision tree (T2[n]) differ in at least one of size, structure, and classification.
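The mixing step can be pictured as interpolating the per-status output distributions of the two data sets. The representation below (a dict from status to a Gaussian's mean and variance, and a linear blend) is a hypothetical simplification, not the patent's decision-tree mixing:

```python
def mix_characteristics(z1, z2, weight=0.5):
    """Blend two singing-characteristics data sets, each modeled as a
    dict mapping a model status to a (mean, variance) pair for the
    relative-pitch distribution; the interpolation weight is illustrative."""
    mixed = {}
    for status in z1:
        m1, v1 = z1[status]
        m2, v2 = z2[status]
        mixed[status] = (weight * m1 + (1 - weight) * m2,
                         weight * v1 + (1 - weight) * v2)
    return mixed
```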
90 | DETERMINING THE CHARACTERISTIC OF A PLAYED CHORD ON A VIRTUAL INSTRUMENT | EP13710740.5 | 2013-03-06 | EP2786371A2 | 2014-10-08 | LITTLE, Alexander, H.; MANJARREZ, Eli T. |
A user interface implemented on a touch-sensitive display for a virtual musical instrument comprises a plurality of chord touch regions configured in a predetermined sequence, each chord touch region corresponding to a chord in a musical key and being divided into a plurality of separate touch zones, the plurality of chord touch regions defining a predetermined set of chords, where each of the separate touch zones in each region is associated with one or more preselected MIDI files stored in a computer-readable medium. Each touch zone is configured to detect one or more of a plurality of touch gesture articulations, including at least one of a legato articulation, a pizzicato articulation, or a staccato articulation. The detected touch gesture articulation determines the preselected MIDI file associated with each touch zone.
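The association claimed above amounts to a lookup keyed by chord region, touch zone, and detected articulation. The table entries and file names below are purely illustrative:

```python
# Hypothetical lookup: a (chord region, touch zone, articulation) triple
# selects one of the preselected MIDI files.
MIDI_FILES = {
    ("C", 0, "legato"):    "c_zone0_legato.mid",
    ("C", 0, "staccato"):  "c_zone0_staccato.mid",
    ("G", 1, "pizzicato"): "g_zone1_pizzicato.mid",
}

def select_midi(chord, zone, articulation):
    """Return the MIDI file for a detected touch gesture, or None."""
    return MIDI_FILES.get((chord, zone, articulation))
```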
91 | SYSTEMS AND METHODS OF NOTE EVENT ADJUSTMENT | EP13710741.3 | 2013-03-06 | EP2786370A1 | 2014-10-08 | LITTLE, Alexander, H.; HERMANN, Tobias Manuel; HOMBURG, Clemens |
Some embodiments provide a music editing application that enables a user to compose and edit note characteristics, e.g., via a touch-sensitive display. The graphical user interface (GUI) can display a portion of a music track including note events. In response to receiving a user selection of a note event and a user indication for editing it, the GUI can display a menu providing a list of characteristics, which can include an option for associating one of several virtual instruments or one of several articulations with the note event. Upon receiving a user input indicating a characteristic, the matrix editor can associate the note event with that characteristic. The music editing application can allow the user to edit additional note characteristics (e.g., an instrument, an articulation) because of the extended capacity for data associated with each note event.

A GUI for an audio editing application enables a user to easily shift the temporal position and/or pitch of a sequence of note events within a musical piece, e.g., via a touch-sensitive display. The GUI displays a set of note events on a matrix grid and a subset of the note events (e.g., selected by the user) on a note-events grid that overlaps the matrix grid. The note-events grid is moveable with respect to the matrix grid such that the subset of note events is shifted against the remaining note events while the note events within the subset maintain their spatial relationship to each other. Further, the user can shift the note-events grid (and the note events therein) to any location within the matrix grid without unintentionally snapping the note events to the nearest grid location on the matrix grid.
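The shifting behavior, moving a selected subset rigidly while unselected notes stay put, can be sketched in a few lines (note representation and names are illustrative):

```python
def shift_selected(notes, selected, d_time, d_pitch):
    """Shift the selected subset of note events by (d_time, d_pitch)
    while the subset keeps its internal spatial relationship; the
    remaining note events stay in place. Notes are (time, pitch) pairs."""
    return [(t + d_time, p + d_pitch) if i in selected else (t, p)
            for i, (t, p) in enumerate(notes)]
```

Because one offset is applied to every selected note, the relative positions inside the subset are preserved exactly, which is the invariant the claim emphasizes.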
92 | Sound signal analysis apparatus, sound signal analysis method and sound signal analysis program | EP14157746.0 | 2014-03-05 | EP2779156A1 | 2014-09-17 | Maezawa, Akira |
A sound signal analysis apparatus (10) includes sound signal input means adapted for inputting a sound signal indicative of a musical piece, tempo detection means adapted for detecting a tempo of each of sections of the musical piece by use of the input sound signal, judgment means adapted for judging stability of the tempo, and control means adapted for controlling a certain target in accordance with a result judged by the judgment means.
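A toy version of the judgment means, assuming per-section tempo estimates are already available; the median-deviation criterion and the 2 BPM tolerance are assumptions for illustration:

```python
import numpy as np

def tempo_is_stable(section_tempos, tolerance_bpm=2.0):
    """Judge tempo stability: stable when every per-section tempo stays
    within tolerance_bpm of the median tempo."""
    tempos = np.asarray(section_tempos, float)
    return bool(np.all(np.abs(tempos - np.median(tempos)) <= tolerance_bpm))
```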
93 | Method for making electronic tones close to acoustic tones, recording system and tone generating system | EP03008783.7 | 2003-04-22 | EP1357538B1 | 2013-09-25 | Koseki, Shinya; Mantani, Rokurota; Tamaki, Takashi; Sugiyama, Nobuo |
94 | Waveform data generating apparatus and waveform data generating program | EP12197636.9 | 2012-12-18 | EP2613311A2 | 2013-07-10 | Akiyama, Hitoshi |
The waveform data generating apparatus has a waveform data generating circuit (WP) which receives a digital signal formed of a plurality of bits constituting a control signal for controlling an external apparatus, and generates waveform data representing the waveform of a control tone which corresponds to the input digital signal, is formed of tones corresponding to the respective values of the bits of the input digital signal, and is formed of frequency components included in a certain high frequency band. The waveform data generating circuit (WP) has a basic waveform data extraction portion (WP7) which extracts, as basic waveform data, a part or the whole of an intermediate portion of the waveform data, the intermediate portion corresponding to a portion of the digital signal whose bit pattern coincides with a certain bit pattern.
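The tone-per-bit-value idea can be sketched as a two-tone scheme in a high band. The specific frequencies and segment length below are assumptions; the patent only requires distinct tones for the bit values within a certain high frequency band:

```python
import numpy as np

def control_tone(bits, fs=48000, f_zero=18000.0, f_one=19000.0, bit_samples=480):
    """Encode control-signal bits as a control-tone waveform:
    one high-band tone segment per bit value."""
    t = np.arange(bit_samples) / fs
    segments = [np.sin(2 * np.pi * (f_one if b else f_zero) * t) for b in bits]
    return np.concatenate(segments)
```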
95 | A REVERBERATION ESTIMATOR | EP10854981.7 | 2010-07-20 | EP2596496A1 | 2013-05-29 | OJALA, Pasi |
A method comprising determining a reverberation time estimate for an audio signal from a first part of an encoded audio signal representing the audio signal.
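For context, the conventional reference method for reverberation time is Schroeder backward integration over an impulse response; the patent instead derives the estimate from part of an encoded audio signal. A sketch of the classic method:

```python
import numpy as np

def rt60_schroeder(impulse_response, fs):
    """Estimate reverberation time: backward-integrate the squared
    impulse response, fit the -5 dB..-25 dB span of the energy decay
    curve, and extrapolate the time for a 60 dB drop."""
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]
    edc_db = 10 * np.log10(energy / energy[0] + 1e-30)
    idx = np.where((edc_db <= -5.0) & (edc_db >= -25.0))[0]
    slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)
    return -60.0 / slope
```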
96 | Tone generation apparatus | EP11159392.7 | 2011-03-23 | EP2369581B1 | 2012-11-21 | Shirahama, Taro |
97 | Tone generation apparatus | EP11159392.7 | 2011-03-23 | EP2369581A1 | 2011-09-28 | Shirahama, Taro |
Waveform data stored in an external memory (125), such as a NAND-type flash memory, are transferred from the external memory to an internal waveform memory (126) via a buffer and read out from the waveform memory to reproduce a tone. A transfer instruction is generated each time data readout from the waveform memory progresses by one page, and the transfer instruction is registered in a transfer queue (521). In response to the transfer instructions from the queue, the waveform data stored in the external memory are read out page by page and stored into the waveform memory. The external memory stores the waveform data in pages, with an error correction code attached to each page. As the waveform data of a page are transferred from the external memory to the waveform memory, errors are detected using the error correction code and, if correctable, corrected. If an error is uncorrectable, the volume of the tone being generated is rapidly attenuated, or a warning is issued.
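The queue-driven transfer with per-page error correction can be sketched as follows; the `correct` callback and the boolean return standing in for "attenuate the tone or warn" are illustrative simplifications:

```python
from collections import deque

def transfer_pages(external_pages, correct, waveform_memory):
    """Queue-driven page-by-page transfer. `correct(page)` models the
    error-correction step and returns (data, ok); an uncorrectable page
    aborts the transfer (the device would mute the tone or warn)."""
    transfer_queue = deque(range(len(external_pages)))
    while transfer_queue:
        page_no = transfer_queue.popleft()     # next transfer instruction
        data, ok = correct(external_pages[page_no])
        if not ok:
            return False
        waveform_memory.append(data)
    return True
```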
98 | JOINT SOUND SYNTHESIS AND SPATIALIZATION | EP07731685.9 | 2007-03-01 | EP1994526A1 | 2008-11-26 | PALLONE, Grégory; EMERIT, Marc; VIRETTE, David |
The invention concerns a process for joint synthesis and spatialization of multiple sound sources at associated spatial positions, including: a) a step of assigning to each source at least one parameter (pi) representing an amplitude; b) a spatialization step implementing an encoding into a plurality of channels, wherein each amplitude (pi) is duplicated and multiplied by a spatialization gain (gim), each spatialization gain being determined for one encoding channel (pgm) and for a source to be spatialized (Si); c) a step of grouping (R) the parameters multiplied by the gains (pim) into respective channels (pg1, ..., pgM), by summing the multiplied parameters (pim) over all the sources (Si) for each channel (pgm); and d) a step of parametric synthesis (SYNTH(1), ..., SYNTH(M)) applied to each of the channels (pgm).
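The grouping step c) is a weighted sum over sources for each channel, i.e. pg_m = Σ_i p_i · g_im, which is a single matrix product when the gains are laid out as a sources-by-channels matrix:

```python
import numpy as np

def group_channels(amplitudes, gains):
    """Grouping step R: each encoding channel pg_m is the sum over
    sources S_i of the amplitude p_i multiplied by its spatialization
    gain g_im."""
    p = np.asarray(amplitudes, float)   # shape (n_sources,)
    g = np.asarray(gains, float)        # shape (n_sources, n_channels)
    return g.T @ p                      # shape (n_channels,)
```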
99 | SOUND SOURCE | EP99973805.7 | 1999-12-06 | EP1091345B1 | 2007-09-19 | KURIHARA, Shigeki |
A sound source, especially one for a portable device such as a portable phone, which produces a sufficient amount of sound and reproduces musically eloquent expression. The waveform data input into a waveform table (TB) are pseudo-rectangular waves having a high spectral density, thereby solving the problem of low energy density and low sound production efficiency. The spectrum includes even harmonics and spectral lines (1, x2, x3, x4) in the range agreeing with the frequency range (HR) where the sound production efficiency is high.
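A dense-spectrum wave with spectral lines at the fundamental and the x2, x3, x4 harmonics can be built as a simple harmonic sum; the 1/h amplitude weighting is an illustrative choice, not the patent's waveform:

```python
import numpy as np

def pseudo_rectangular(freq, fs=8000, n_harmonics=4, length=1024):
    """Sum the first n_harmonics harmonics (including even ones, cf. the
    spectral lines 1, x2, x3, x4) to build a dense-spectrum wave."""
    t = np.arange(length) / fs
    wave = np.zeros(length)
    for h in range(1, n_harmonics + 1):
        wave += np.sin(2 * np.pi * h * freq * t) / h
    return wave
```

Concentrating these lines in the band where a small transducer is efficient is what raises the perceived loudness.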
100 | SOUND SOURCE | EP99973805 | 1999-12-06 | EP1091345A4 | 2005-05-11 | KURIHARA, Shigeki |
A sound source, especially one for a portable device such as a portable phone, which produces a sufficient amount of sound and reproduces musically eloquent expression. The waveform data input into a waveform table (TB) are pseudo-rectangular waves having a high spectral density, thereby solving the problem of low energy density and low sound production efficiency. The spectrum includes even harmonics and spectral lines (1, x2, x3, x4) in the range agreeing with the frequency range (HR) where the sound production efficiency is high.