Entropy encoding scheme
Application no. | EP14160512 | Filing date | 2012-01-12 | Publication no. | EP2760138B1 | Publication date | 2018-03-07
Applicant | GE VIDEO COMPRESSION LLC | Inventors | MARPE DETLEV; NGUYEN TUNG; SCHWARZ HEIKO; WIEGAND THOMAS
Abstract | Decomposing the value range of the respective syntax elements into a sequence of n partitions, with the components of z lying within the respective partitions coded separately, at least one by VLC coding and at least one by PIPE or arithmetic coding, is used to greatly increase the compression efficiency at a moderate coding overhead, since the coding scheme used may be better adapted to the syntax element statistics. Accordingly, in accordance with embodiments, syntax elements are decomposed into a respective number n of source symbols s_i with i = 1...n, the respective number n of source symbols depending on which of a sequence of n partitions (140_1-3), into which the value range of the respective syntax element is sub-divided, the value z of the respective syntax element falls into, so that a sum of the values of the respective number of source symbols s_i yields z, and, if n > 1, for all i = 1...n-1, the value of s_i corresponds to the range of the i-th partition.
Claims |

1. Entropy encoding apparatus comprising: a decomposer (136) configured to convert a sequence (138) of syntax elements, having a value range which is sub-divided into a sequence of N partitions (140_1-3), into a sequence (106) of source symbols by individually decomposing at least a subgroup of the syntax elements into a respective number n of source symbols s_i with i = 1...n, the respective number n of source symbols depending on which of the sequence of N partitions (140_1-3) the value z of the respective syntax element falls into, so that a sum of the values of the respective number of source symbols s_i yields z, and, if n > 1, for all i = 1...n-1, the value of s_i corresponds to the range of the i-th partition; a subdivider (100) configured to subdivide the sequence (106) of source symbols into a first subsequence (108) of source symbols and a second subsequence (110) of source symbols such that all source symbols s_x, with x being a member of a first subset of {1...N}, are contained within the first subsequence (108) and all source symbols s_y, with y being a member of a second subset of {1...N} disjoint from the first subset, are contained within the second subsequence (110); a VLC encoder (102) configured to encode, symbol-wise, the source symbols of the first subsequence (108); and an arithmetic encoder (104) configured to encode the second subsequence (110) of source symbols, wherein the number of partitions N and the bounds of the partitions depend on the actual syntax element.

2. Entropy encoding apparatus according to claim 1, wherein the values z of the subgroup of the syntax elements are absolute values.

3. Entropy encoding apparatus according to claim 1 or 2, wherein the second subset is {1}, with the sequence of N partitions being arranged such that a p-th partition covers higher values of the value range than a q-th partition for all p, q ∈ {1...N} with p > q.
4. Entropy encoding method comprising: converting a sequence (138) of syntax elements, having a value range which is sub-divided into a sequence of N partitions (140_1-3), into a sequence (106) of source symbols by individually decomposing at least a subgroup of the syntax elements into a respective number n of source symbols s_i with i = 1...n, the respective number n of source symbols depending on which of the sequence of N partitions (140_1-3) the value z of the respective syntax element falls into, so that a sum of the values of the respective number of source symbols s_i yields z, and, if n > 1, for all i = 1...n-1, the value of s_i corresponds to the range of the i-th partition; subdividing the sequence (106) of source symbols into a first subsequence (108) of source symbols and a second subsequence (110) of source symbols such that all source symbols s_x, with x being a member of a first subset of {1...N}, are contained within the first subsequence (108) and all source symbols s_y, with y being a member of a second subset of {1...N} disjoint from the first subset, are contained within the second subsequence (110); by VLC encoding, symbol-wise encoding the source symbols of the first subsequence (108); and by arithmetic encoding, encoding the second subsequence (110) of source symbols, wherein the number of partitions N and the bounds of the partitions depend on the actual syntax element.

5. A computer program having a program code for performing, when running on a computer, a method according to claim 4.
Description | [0001] The present invention relates to entropy encoding and may be used in applications such as, for example, video and audio compression.

[0002] Entropy coding, in general, can be considered the most generic form of lossless data compression. Lossless compression aims to represent discrete data with fewer bits than needed for the original data representation, but without any loss of information. Discrete data can be given in the form of text, graphics, images, video, audio, speech, facsimile, medical data, meteorological data, financial data, or any other form of digital data.

[0003] In entropy coding, the specific high-level characteristics of the underlying discrete data source are often neglected. Consequently, any data source is considered to be given as a sequence of source symbols that takes values in a given m-ary alphabet and that is characterized by a corresponding (discrete) probability distribution {p_1, ..., p_m}. In these abstract settings, the lower bound of any entropy coding method, in terms of expected codeword length in bits per symbol, is given by the entropy

    H = - sum_{k=1..m} p_k · log2(p_k).

[0004] Huffman codes and arithmetic codes are well-known examples of practical codes capable of approximating the entropy limit (in a certain sense). For a fixed probability distribution, Huffman codes are relatively easy to construct. The most attractive property of Huffman codes is that their implementation can be efficiently realized by the use of variable-length code (VLC) tables. However, when dealing with time-varying source statistics, i.e., changing symbol probabilities, the adaptation of the Huffman code and its corresponding VLC tables is quite demanding, both in terms of algorithmic complexity and in terms of implementation costs. Also, in the case of a dominant alphabet value with p_k > 0.5, the redundancy of the corresponding Huffman code (without using any alphabet extension such as run length coding) may be quite substantial.
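The entropy bound stated in paragraph [0003] can be checked with a short numerical sketch; the function name and the example distribution below are illustrative, not from the patent:

```python
import math

def entropy_bits_per_symbol(probs):
    """Shannon entropy H = -sum(p_k * log2(p_k)): the lower bound, in bits
    per symbol, on the expected codeword length of any entropy code."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A skewed binary source (p_1 = 0.9) needs well under 1 bit/symbol on
# average, which a plain per-symbol Huffman/VLC code cannot reach.
print(round(entropy_bits_per_symbol([0.9, 0.1]), 4))  # → 0.469
```

This is the quantitative form of the Huffman redundancy remark above: a per-symbol code must spend at least one bit on the dominant value, while the entropy of the skewed source is far lower.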
Another shortcoming of Huffman codes is the fact that, in the case of higher-order probability modeling, multiple sets of VLC tables may be required. Arithmetic coding, on the other hand, while being substantially more complex than VLC, offers the advantage of a more consistent and adequate handling when coping with adaptive and higher-order probability modeling as well as with highly skewed probability distributions. This characteristic basically results from the fact that arithmetic coding provides a mechanism, at least conceptually, to map any given value of a probability estimate in a more or less direct way to a portion of the resulting codeword. Being provided with such an interface, arithmetic coding allows for a clean separation between the tasks of probability modeling and probability estimation, on the one hand, and the actual entropy coding, i.e., mapping of symbols to codewords, on the other hand.

[0005] An alternative to arithmetic coding and VLC coding is PIPE coding. To be more precise, in PIPE coding, the unit interval is partitioned into a small set of disjoint probability intervals for pipelining the coding processing along the probability estimates of random symbol variables. According to this partitioning, an input sequence of discrete source symbols with arbitrary alphabet sizes may be mapped to a sequence of alphabet symbols, and each of the alphabet symbols is assigned to one particular probability interval which is, in turn, encoded by an especially dedicated entropy encoding process. With each of the intervals being represented by a fixed probability, the probability interval partitioning entropy (PIPE) coding process may be based on the design and application of simple variable-to-variable length codes. The probability modeling can either be fixed or adaptive. However, while PIPE coding is significantly less complex than arithmetic coding, it still has a higher complexity than VLC coding.
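The PIPE principle of paragraph [0005] can be sketched as follows. The interval bounds and function name are hypothetical, chosen only to illustrate routing alphabet symbols to per-interval partial bitstreams; the per-interval variable-to-variable length codes themselves are not shown:

```python
import bisect

# Hypothetical bounds partitioning the probability range into a small set
# of disjoint intervals; each interval gets its own fixed representative
# probability and its own simple v2v code.
INTERVAL_UPPER_BOUNDS = [0.05, 0.15, 0.30, 0.50]

def pipe_route(bins_with_estimates):
    """Route each (bin, probability_estimate) pair to the partial stream
    of the probability interval containing its estimate."""
    streams = {k: [] for k in range(len(INTERVAL_UPPER_BOUNDS))}
    for b, p in bins_with_estimates:
        k = bisect.bisect_left(INTERVAL_UPPER_BOUNDS, p)
        streams[k].append(b)
    return streams
```

Because every symbol landing in an interval is coded as if it had that interval's fixed probability, each partial stream can use a precomputed code, which is where PIPE saves complexity relative to full arithmetic coding.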
[0006] Therefore, it would be favorable to have an entropy coding scheme at hand which achieves a better tradeoff between coding complexity on the one hand and compression efficiency on the other hand, even when compared to PIPE coding, which already combines advantages of both arithmetic coding and VLC coding.

[0007] Further, in general, it would be favorable to have an entropy coding scheme at hand which achieves better compression efficiency per se, at a moderate coding complexity.

[0008] WO 2008/129021 A2

[0009] It is an object of the present invention to provide an entropy coding concept which fulfils the above-identified demand, i.e. achieves a better tradeoff between coding complexity on the one hand and compression efficiency on the other hand.

[0010] This object is achieved by the subject matter of the independent claims.

[0011] The present invention is based on the idea that decomposing the value range of the respective syntax elements into a sequence of n partitions, with the components of the syntax element values z lying within the respective partitions coded separately, at least one by VLC coding and at least one by arithmetic coding or any other entropy coding method, may greatly increase the compression efficiency at a moderate coding overhead, in that the coding scheme used may be better adapted to the syntax element statistics.
Accordingly, in accordance with embodiments of the present invention, syntax elements are decomposed into a respective number n of source symbols s_i with i = 1...n, the respective number n of source symbols depending on which of a sequence of n partitions (140_1-3), into which the value range of the respective syntax element is sub-divided, the value z of the respective syntax element falls into, so that a sum of the values of the respective number of source symbols s_i yields z, and, if n > 1, for all i = 1...n-1, the value of s_i corresponds to the range of the i-th partition.

[0012] Preferred aspects of the present invention are the subject of the enclosed dependent claims.

[0013] Preferred embodiments of the present invention are described below with respect to the figures. These embodiments represent, insofar as they do not use arithmetic coding next to the VLC coding, comparison embodiments. Among the figures,
[0014] Before several embodiments of the present application are described in the following with respect to the figures, it is noted that equal reference signs are used throughout the figures in order to denote equal or equivalent elements, and the description of these elements presented with any of the previous figures shall also apply to any of the following figures as far as the previous description does not conflict with the description of the current figures.

[0015] Fig. 1a

[0016] The subdivider 100 is configured to subdivide a sequence of source symbols 106 into a first subsequence 108 of source symbols and a second subsequence 110 of source symbols. The VLC encoder 102 has an input connected to a first output of subdivider 100 and is configured to symbol-wise convert the source symbols of the first subsequence 108 into codewords forming a first bitstream 112. The VLC encoder 102 may comprise a look-up table and use, individually, the source symbols as an index in order to look up, per source symbol, a respective codeword in the look-up table. The VLC encoder outputs the latter codeword and proceeds with the following source symbol in subsequence 108 in order to output a sequence of codewords in which each codeword is associated with exactly one of the source symbols within subsequence 108. The codewords may have different lengths and may be defined such that no codeword forms a prefix of any other codeword. Additionally, the look-up table may be static.
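The table look-up behavior of VLC encoder 102 described in [0016] can be sketched as follows; the table entries are hypothetical, chosen only so that the prefix-free property holds:

```python
# Hypothetical static VLC table: prefix-free, i.e. no codeword is a
# prefix of any other, so the bitstream parses codeword by codeword.
VLC_TABLE = {0: "1", 1: "01", 2: "001", 3: "0001"}

def vlc_encode(source_symbols):
    """Symbol-wise table look-up, one codeword per source symbol
    (cf. VLC encoder 102)."""
    return "".join(VLC_TABLE[s] for s in source_symbols)

def vlc_decode(bits):
    """Parse the bitstream back; the prefix-free property guarantees
    that the first table hit is the intended codeword."""
    inverse = {cw: s for s, cw in VLC_TABLE.items()}
    symbols, cw = [], ""
    for bit in bits:
        cw += bit
        if cw in inverse:
            symbols.append(inverse[cw])
            cw = ""
    return symbols
```

Because the table is static, encoding and decoding are a single dictionary access per symbol, which is the complexity advantage over the PIPE path.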
[0017] The PIPE encoder 104 has an input connected to a second output of subdivider 100 and is configured to encode the second subsequence 110 of source symbols, represented in the form of a sequence of alphabet symbols. It comprises an assigner 114 configured to assign, to each alphabet symbol of the sequence of alphabet symbols, a measure for an estimate of a probability distribution among the possible values the respective alphabet symbol may assume, based on information contained within previous alphabet symbols of the sequence; a plurality of entropy encoders 116, each of which is configured to convert the alphabet symbols forwarded to the respective entropy encoder into a respective second bitstream 118; and a selector 120 configured to forward each alphabet symbol of the second subsequence 110 to a selected one of the plurality of entropy encoders 116, the selection depending on the afore-mentioned measure for the estimate of the probability distribution assigned to the respective alphabet symbol. The association between source symbols and alphabet symbols may be such that each alphabet symbol is uniquely associated with exactly one source symbol of subsequence 110 in order to represent, along with possibly further alphabet symbols of the sequence which may immediately follow each other, this one source symbol.

[0018] As described in more detail below, the sequence 106 of source symbols may be a sequence of syntax elements of a parsable bitstream. The parsable bitstream may, for example, represent video and/or audio content in a scalable or non-scalable manner, with the syntax elements representing, for example, transform coefficient levels, motion vectors, motion picture reference indices, scale factors, audio envelope energy values or the like.
The syntax elements may, in particular, be of different type or category, with syntax elements of the same type, for example, having the same meaning within the parsable bitstream but with respect to different portions thereof, such as different pictures, different macroblocks, different spectral components or the like, whereas syntax elements of different type may have a different meaning within the bitstream; for example, a motion vector has a different meaning than a syntax element representing a transform coefficient level of the motion prediction residual.

[0019] The subdivider 100 may be configured to perform the subdivision depending on the type of the syntax elements. That is, subdivider 100 may forward syntax elements of a first group of types to the first subsequence 108 and forward syntax elements of a second group of types, distinct from the first group, to the second subsequence 110. The subdivision performed by subdivider 100 may be designed such that the symbol statistics of the syntax elements within subsequence 108 are suitable for being VLC encoded by VLC encoder 102, i.e. result in nearly the minimum possible entropy despite the use of VLC encoding and its restrictions with regard to its suitability for certain symbol statistics, as outlined in the introductory portion of the specification. On the other hand, the subdivider 100 may forward all other syntax elements to the second subsequence 110, so that these syntax elements, having symbol statistics not suitable for VLC encoding, are encoded by the more complex, but more efficient (in terms of compression ratio) PIPE encoder 104.
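The type-dependent routing of [0019] amounts to a simple partition of the syntax element stream. A minimal sketch, in which the type names and the grouping are hypothetical examples rather than the patent's actual assignment:

```python
# Hypothetical grouping: element types whose symbol statistics suit a
# simple VLC code go to the first subsequence (108); all others go to
# the PIPE-coded second subsequence (110).
VLC_SUITED_TYPES = {"ref_idx", "scale_factor"}

def subdivide(syntax_elements):
    """Split (type, value) syntax elements into the VLC-coded first
    subsequence and the PIPE-coded second subsequence (cf. subdivider 100)."""
    first, second = [], []
    for elem_type, value in syntax_elements:
        (first if elem_type in VLC_SUITED_TYPES else second).append(value)
    return first, second
```

The decoder can undo this split without side information as long as it parses the bitstream in syntax order and therefore knows each upcoming element's type.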
[0020] As will be described in more detail with respect to the following figures, the PIPE encoder 104 may comprise a symbolizer 122 configured to individually map each syntax element of the second subsequence 110 into a respective partial sequence of alphabet symbols, together forming the afore-mentioned sequence 124 of alphabet symbols. Conversely, the symbolizer 122 may not be present if, for example, the source symbols of subsequence 110 are already represented as respective partial sequences of alphabet symbols. The symbolizer 122 is, for example, advantageous in case the source symbols within the subsequence 110 are of different alphabets, and especially alphabets having different numbers of possible alphabet symbols. Namely, in this case, the symbolizer 122 is able to harmonize the alphabets of the symbols arriving within substream 110. The symbolizer 122 may, for example, be embodied as a binarizer configured to binarize the symbols arriving within subsequence 110.

[0021] As mentioned before, the syntax elements may be of different type. This may also be true for the syntax elements within substream 110. The symbolizer 122 may then be configured to perform the individual mapping of the syntax elements of the subsequence 110 using a symbolizing mapping scheme, such as a binarization scheme, that differs for syntax elements of different type. Examples of specific binarization schemes are presented in the following description, such as a unary binarization scheme, an exp-Golomb binarization scheme of order 0 or order 1, a truncated unary binarization scheme, a truncated and reordered exp-Golomb order 0 binarization scheme, or a non-systematic binarization scheme.

[0022] Accordingly, the entropy encoders 116 could be configured to operate on a binary alphabet. Finally, it should be noted that symbolizer 122 may be regarded as being part of the PIPE encoder 104 itself as shown in Fig.
1a.

[0023] Similar to the preceding remark, it should be noted that the assigner 114, although shown to be connected serially between symbolizer 122 and selector 120, may alternatively be regarded as being connected between an output of symbolizer 122 and a first input of selector 120, with an output of assigner 114 being connected to another input of selector 120, as later described with respect to Fig. 3.

[0024] As far as the output of the entropy encoding apparatus of Fig. 1a Figs. 22 to 24 Fig. 1 Figs. 5 to 13 Fig. 1a

[0025] As has been described above, subdivider 100 may perform the subdivision syntax-element-wise, i.e. the source symbols the subdivider 100 operates on may be whole syntax elements; alternatively speaking, subdivider 100 may operate in units of syntax elements.

[0026] However, the entropy encoding apparatus of Fig. 1a

[0027] In particular, decomposer 136 may be configured to convert the sequence 138 of syntax elements into the sequence 106 of source symbols by individually decomposing each syntax element into a respective integer number of source symbols. The integer number may vary among the syntax elements. In particular, some of the syntax elements may even be left unchanged by decomposer 136, whereas other syntax elements are decomposed into exactly two, or at least two, source symbols. The subdivider 100 may be configured to forward one of the source symbols of such decomposed syntax elements to the first subsequence 108 of source symbols and another source symbol of the same decomposed syntax element to the second subsequence 110 of source symbols. As mentioned above, the syntax elements within bitstream 138 may be of different type, and the decomposer 136 may be configured to perform the individual decomposing depending on the type of the syntax element.
The decomposer 136 preferably performs the individual decomposing of the syntax elements such that there exists a predetermined unique reverse mapping, later used at the decoding side, from the integer number of source symbols to the respective syntax element, common for all syntax elements.

[0028] For example, the decomposer 136 may be configured to decompose syntax elements z in parsable bitstream 138 into two source symbols x and y so that z = x + y, z = x - y, z = x · y or z = x : y. By this measure, subdivider 100 may decompose the syntax elements into two components, namely source symbols of source symbol stream 106, one of which is suitable to be VLC encoded in terms of compression efficiency, such as x, and the other of which is not suitable for VLC encoding and is, therefore, passed on to the second substream 110 rather than the first substream 108, such as y. The decomposition used by decomposer 136 need not be bijective. However, as mentioned before, there should exist a reverse mapping enabling unique retrieval of the syntax element from the possible decompositions among which decomposer 136 may choose if the decomposition is not bijective.

[0029] Up to now, different possibilities have been described for the handling of different syntax elements. Whether such syntax elements or cases exist is optional. The further description, however, concentrates on syntax elements which are decomposed by decomposer 136 according to the following principle.

[0030] As shown in Fig. 1b and Fig. 1c, the value range of these syntax elements is sub-divided into a sequence of partitions 140_1-3. As shown in Fig. 1b, if the value z of a syntax element is greater than or equal to the bound 142, i.e. the upper limit limit1 separating partitions 140_1 and 140_2, then the bound limit1 of the first partition 140_1 is subtracted from z, yielding z', and z is again checked as to whether it is even greater than or equal to the bound 144 of the second partition 140_2, i.e. the upper limit limit2 separating partitions 140_2 and 140_3.
If z is greater than or equal to the bound 144, then the range limit2 - limit1 of the second partition 140_2 is subtracted from z', resulting in z''. In the first case, where z is smaller than limit1, the syntax element z is sent to subdivider 100 in plain. In case of z being between limit1 and limit2, the syntax element z is sent to subdivider 100 as a tuple (limit1, z') with z = limit1 + z', and in case of z being above limit2, the syntax element z is sent to subdivider 100 as a triplet (limit1, limit2 - limit1, z'') with z = limit1 + (limit2 - limit1) + z''. The first (or sole) component, i.e. z or limit1, forms a first source symbol to be coded by subdivider 100; the second component, i.e. z' or limit2 - limit1, forms a second source symbol to be coded by subdivider 100, if present; and the third component, i.e. z'', forms a third source symbol to be coded by subdivider 100, if present. Thus, in accordance with Fig. 1b and 1c

[0031] In any case, all these different components or resulting source symbols are, according to the below embodiments, coded with different coding alternatives. At least one of them is forwarded by subdivider 100 to PIPE coder 104, and at least another one is sent to VLC coder 102.

[0032] Particularly advantageous embodiments are outlined in more detail below.

[0033] After having described an entropy encoding apparatus above, an entropy decoding apparatus is described with respect to Fig. 2a Fig. 2a Fig. 1 Fig. 1a Fig. 4

[0034] As the first subsequence 214 of source symbols and the second subsequence 208 of source symbols commonly form one common sequence 210 of source symbols, the entropy decoding apparatus of Fig. 2a Fig. 1a

[0035] In accordance with the description presented above with respect to Fig. 1

[0036] Likewise, the PIPE decoder 202 could comprise a desymbolizer 222 connected between the output of selector 214 and an input of recombiner 220. Similar to the description above with respect to Fig. 1

[0037] Similar to the above discussion of Fig.
1

[0038] For handling the just-mentioned syntax elements, the entropy decoding apparatus of Fig. 2a Fig. 1a

[0039] As described above, the embodiments described herein below, however, concentrate on syntax elements which are decomposed by decomposer 136 according to Fig. 1b and 1c Fig. 2a

[0040] As shown in Fig. 2b, composer 224 combines the source symbols s_1 to s_x, with x being any of 1 to 3 in the present example. Two or more stages may exist. As shown in Fig. 2b, composer 224 starts with the first source symbol s_1 as z and checks as to whether z is equal to the first bound limit1. If this is not the case, z has been found. Otherwise, composer 224 adds the next source symbol s_2 of source symbol stream 218 to z and again checks as to whether this z equals limit2. If not, z has been found. Otherwise, composer 224 adds the next source symbol s_3 of source symbol stream 218 to z, in order to obtain z in its final form. Generalizations to a smaller or greater maximum number of source symbols are readily derivable from the above description, and such alternatives will also be described in the following.

[0041] In any case, all these different components or resulting source symbols are, according to the below description, coded with different coding alternatives. At least one of them is forwarded by subdivider 100 to PIPE coder 104, and at least another one is sent to VLC coder 102.

[0042] Particularly advantageous details are outlined in more detail below. These details concentrate on favorable possibilities of dividing the value range of the syntax elements and on the VLC and PIPE entropy coding schemes which may be used to encode the source symbols.

[0043] Further, as has also been described above with respect to Fig. 1 Fig. 2a Fig. 2a Fig. 1

[0044] Thus, Fig. 1a Fig. 2a Fig. 1 Fig. 1a 2 Fig. 1a 2

[0045] Additional measures may be provided in order to cope with situations where certain ones of the entropy encoders 116 are selected so seldom that it takes too long a time to obtain a valid codeword within that very rarely used entropy encoder 116.
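Returning to the partition scheme of paragraph [0030] and its reversal by composer 224 in paragraph [0040], both directions can be sketched together. The bounds limit1 = 8 and limit2 = 24 are hypothetical example values, not taken from the patent:

```python
LIMIT1, LIMIT2 = 8, 24   # hypothetical partition bounds for one element type

def decompose(z):
    """Split z into up to three source symbols whose values sum to z
    (cf. decomposer 136, Figs. 1b/1c): s1 covers the first partition,
    s2 the width of the second, and the last symbol the remainder."""
    if z < LIMIT1:
        return (z,)                                   # plain, n = 1
    if z < LIMIT2:
        return (LIMIT1, z - LIMIT1)                   # tuple, n = 2
    return (LIMIT1, LIMIT2 - LIMIT1, z - LIMIT2)      # triplet, n = 3

def compose(symbols):
    """Rebuild z from its source symbols (cf. composer 224): keep adding
    symbols while the running sum still equals a partition bound, since
    in that case a further symbol must follow."""
    it = iter(symbols)
    z = next(it)
    if z == LIMIT1:
        z += next(it)
        if z == LIMIT2:
            z += next(it)
    return z
```

Round-tripping compose(decompose(z)) == z holds for every non-negative z, including the boundary values z = limit1 and z = limit2, which is exactly the unique reverse mapping required of the decomposer in paragraph [0027].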
Examples of such measures are described in more detail below. In particular, the interleaver 128 along with the entropy encoders 116 may, in this case, be configured to flush their alphabet symbols collected so far and the codewords having been entered into the afore-mentioned codeword entries, respectively, in a manner such that the time of this flushing procedure may be forecast or emulated at the decoding side.

[0046] At the decoding side, the deinterleaver 230 may act in the reverse sense: whenever, in accordance with the afore-mentioned parsing scheme, the next source symbol to be decoded is a VLC coded symbol, a current codeword within common bitstream 228 is regarded as a VLC codeword and forwarded within bitstream 206 to VLC decoder 200. On the other hand, whenever any of the alphabet symbols belonging to any of the PIPE encoded symbols of substream 208 is a primary alphabet symbol, i.e. necessitates a new mapping of a codeword of a respective one of the bitstreams 216 to a respective alphabet symbol sequence by the respective entropy decoder 210, the current codeword of common bitstream 228 is regarded as a PIPE encoded codeword and forwarded to the respective entropy decoder 210. The detection of the next codeword border, i.e. the detection of the extension of the next codeword from the end of the codeword just having been forwarded to either of the decoders 200 and 202, respectively, to its end within the inbound interleaved bitstream 228, may be deferred and performed with knowledge of which of the decoders 200 and 202 is the dedicated recipient of this next codeword in accordance with the above-outlined rule: based on this knowledge, the codebook used by the recipient decoder is known and the respective codeword detectable.
If, on the other hand, the codebooks were designed such that the codeword borders were detectable without a-priori knowledge of the recipient decoder among 200 and 202, then the codeword separation could be performed in parallel. In any case, due to the interleaving, the source symbols are available at the decoder in an entropy decoded form, i.e. as source symbols, in their correct order at reasonable delay.

[0047] After having described above embodiments for an entropy encoding apparatus and a respective entropy decoding apparatus, next more details on the above-mentioned PIPE encoders and PIPE decoders are described.

[0048] A PIPE encoder is illustrated in Fig. 3 Fig. 1a

Table 1: Binarization examples for countable infinite sets (or large finite sets).
Table 2: Binarization examples for finite sets.
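Two of the binarization schemes named in paragraph [0021], of the kind exemplified by Tables 1 and 2 (whose entries are not reproduced in this extract), can be sketched as follows. The exp-Golomb variant shown is the classic zero-prefixed order-0 form (as in H.264's ue(v)); it is given only as an illustration of the scheme family, not as the patent's exact table:

```python
def unary(x):
    """Unary binarization: x ones terminated by a single zero."""
    return "1" * x + "0"

def exp_golomb0(x):
    """Order-0 exp-Golomb binarization: a length prefix of zeros, a
    marker one, then the remaining bits of x + 1 as the suffix."""
    suffix = bin(x + 1)[3:]           # bits of x + 1 after its leading 1
    return "0" * len(suffix) + "1" + suffix
```

Unary codes suit small values with steeply decreasing probabilities, while exp-Golomb codewords grow only logarithmically, which makes them usable for the countably infinite value sets of Table 1.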
[0049] Each bin 3 of the sequence of bins created by the binarizer 2 is fed into the parameter assigner 4 in sequential order. The parameter assigner assigns a set of one or more parameters to each bin 3 and outputs the bin with the associated set of parameters 5. The set of parameters is determined in exactly the same way at encoder and decoder. The set of parameters may consist of one or more of the following parameters:
[0050] The parameter assigner 4 may associate each bin 3, 5 with a measure for an estimate of the probability of one of the two possible bin values for the current bin. The parameter assigner 4 may associate each bin 3, 5 with a measure for an estimate of the probability of the less probable or more probable bin value for the current bin and an identifier specifying an estimate of which of the two possible bin values represents the less probable or more probable bin value for the current bin. It should be noted that the probability of the less probable or more probable bin value and the identifier specifying which of the two possible bin values represents the less probable or more probable bin value are equivalent measures for the probability of one of the two possible bin values.

[0051] The parameter assigner 4 may associate each bin 3, 5 with a measure for an estimate of the probability of one of the two possible bin values for the current bin and one or more further parameters (which may be one or more of the above-listed parameters). Further, the parameter assigner 4 may associate each bin 3, 5 with a measure for an estimate of the probability of the less probable or more probable bin value for the current bin, an identifier specifying an estimate of which of the two possible bin values represents the less probable or more probable bin value for the current bin, and one or more further parameters (which may be one or more of the above-listed parameters).
[0052] The parameter assigner 4 may determine one or more of the above-mentioned probability measures (a measure for an estimate of the probability of one of the two possible bin values for the current bin, a measure for an estimate of the probability of the less probable or more probable bin value for the current bin, an identifier specifying an estimate of which of the two possible bin values represents the less probable or more probable bin value for the current bin) based on a set of one or more already encoded symbols. The encoded symbols that are used for determining the probability measures can include one or more already encoded symbols of the same symbol category, one or more already encoded symbols of the same symbol category that correspond to data sets (such as blocks or groups of samples) of neighboring spatial and/or temporal locations (in relation to the data set associated with the current source symbol), or one or more already encoded symbols of different symbol categories that correspond to data sets of the same and/or neighboring spatial and/or temporal locations (in relation to the data set associated with the current source symbol).

[0053] Each bin with an associated set of parameters 5 that is output by the parameter assigner 4 is fed into a bin buffer selector 6. The bin buffer selector 6 potentially modifies the value of the input bin 5 based on the input bin value and the associated parameters 5 and feeds the output bin 7 - with a potentially modified value - into one of two or more bin buffers 8. The bin buffer 8 to which the output bin 7 is sent is determined based on the value of the input bin 5 and/or the value of the associated parameters 5.

[0054] The bin buffer selector 6 may not modify the value of the bin, i.e., the output bin 7 always has the same value as the input bin 5.
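The context-based probability estimation of paragraph [0052] can be sketched with a simple counting model; the class, its category keys, and the Laplace-smoothed counting scheme are illustrative assumptions, not the patent's estimator:

```python
from collections import defaultdict

class ContextModel:
    """Derive a probability estimate for a bin from the already encoded
    bins of the same symbol category (cf. parameter assigner 4)."""

    def __init__(self):
        # Laplace-smoothed counts [zeros, ones] per symbol category, so
        # the very first estimate is 0.5 before any bin has been coded.
        self.counts = defaultdict(lambda: [1, 1])

    def estimate_p1(self, category):
        """Current estimate of P(bin = 1) for the given category."""
        zeros, ones = self.counts[category]
        return ones / (zeros + ones)

    def update(self, category, bin_value):
        """Account for one more encoded bin of this category."""
        self.counts[category][bin_value] += 1
```

Since the decoder sees the same already decoded bins, it can run the identical update rule and arrive at the same estimates, which is what the text means by the parameters being "determined in exactly the same way at encoder and decoder".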
[0055] The bin buffer selector 6 may determine the output bin value 7 based on the input bin value 5 and the associated measure for an estimate of the probability for one of the two possible bin values for the current bin. The output bin value 7 may be set equal to the input bin value 5 if the measure for the probability for one of the two possible bin values for the current bin is less than (or less than or equal to) a particular threshold; if the measure for the probability for one of the two possible bin values for the current bin is greater than or equal to (or greater than) a particular threshold, the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value). Alternatively, the output bin value 7 may be set equal to the input bin value 5 if the measure for the probability for one of the two possible bin values for the current bin is greater than (or greater than or equal to) a particular threshold; if the measure for the probability for one of the two possible bin values for the current bin is less than or equal to (or less than) a particular threshold, the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value). The value of the threshold may correspond to a value of 0.5 for the estimated probability for both possible bin values.

[0056] The bin buffer selector 6 may determine the output bin value 7 based on the input bin value 5 and the associated identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin.
The output bin value 7 may be set equal to the input bin value 5 if the identifier specifies that the first of the two possible bin values represents the less probable (or more probable) bin value for the current bin, and the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value) if the identifier specifies that the second of the two possible bin values represents the less probable (or more probable) bin value for the current bin.

[0057] The bin buffer selector 6 may determine the bin buffer 8 to which the output bin 7 is sent based on the associated measure for an estimate of the probability for one of the two possible bin values for the current bin. The set of possible values for the measure for an estimate of the probability for one of the two possible bin values may be finite and the bin buffer selector 6 may contain a table that associates exactly one bin buffer 8 with each possible value for the estimate of the probability for one of the two possible bin values, where different values for the measure for an estimate of the probability for one of the two possible bin values can be associated with the same bin buffer 8. Further, the range of possible values for the measure for an estimate of the probability for one of the two possible bin values may be partitioned into a number of intervals, the bin buffer selector 6 determines the interval index for the current measure for an estimate of the probability for one of the two possible bin values, and the bin buffer selector 6 contains a table that associates exactly one bin buffer 8 with each possible value for the interval index, where different values for the interval index can be associated with the same bin buffer 8. Input bins 5 with opposite measures for an estimate of the probability for one of the two possible bin values (opposite measures are those which represent probability estimates P and 1 - P) may be fed into the same bin buffer 8.
Further, the association of the measure for an estimate of the probability for one of the two possible bin values for the current bin with a particular bin buffer may be adapted over time, e.g. in order to ensure that the created partial bitstreams have similar bit rates.

[0058] The bin buffer selector 6 may determine the bin buffer 8 to which the output bin 7 is sent based on the associated measure for an estimate of the probability for the less probable or more probable bin value for the current bin. The set of possible values for the measure for an estimate of the probability for the less probable or more probable bin value may be finite and the bin buffer selector 6 may contain a table that associates exactly one bin buffer 8 with each possible value of the estimate of the probability for the less probable or more probable bin value, where different values for the measure for an estimate of the probability for the less probable or more probable bin value can be associated with the same bin buffer 8. Further, the range of possible values for the measure for an estimate of the probability for the less probable or more probable bin value may be partitioned into a number of intervals, the bin buffer selector 6 determines the interval index for the current measure for an estimate of the probability for the less probable or more probable bin value, and the bin buffer selector 6 contains a table that associates exactly one bin buffer 8 with each possible value for the interval index, where different values for the interval index can be associated with the same bin buffer 8. The association of the measure for an estimate of the probability for the less probable or more probable bin value for the current bin with a particular bin buffer may be adapted over time, e.g. in order to ensure that the created partial bitstreams have similar bit rates.
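The interval-index variant of [0057]-[0058] can be sketched as follows. Opposite estimates P and 1 - P are folded together, the folded range (0, 0.5] is partitioned into intervals, and a table maps each interval index to a bin buffer, with several indices allowed to share one buffer. The interval bounds and the table below are illustrative assumptions:

```python
import bisect

# Illustrative partition of the folded probability range (0, 0.5]
# into four intervals, given by their upper bounds.
INTERVAL_BOUNDS = [0.1, 0.2, 0.35, 0.5]
# Table from interval index to bin buffer; as the text allows,
# different interval indices may map to the same buffer.
INTERVAL_TO_BUFFER = [0, 1, 1, 2]

def buffer_for_estimate(p_one):
    """Return the bin buffer index for an estimate P(bin == v)."""
    folded = min(p_one, 1.0 - p_one)   # P and 1 - P are treated alike
    idx = bisect.bisect_left(INTERVAL_BOUNDS, folded)
    return INTERVAL_TO_BUFFER[idx]
```

Because the estimate is folded before the lookup, bins with opposite measures P and 1 - P land in the same buffer, as the text describes.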
[0059] Each of the two or more bin buffers 8 is connected with exactly one bin encoder 10 and each bin encoder is only connected with one bin buffer 8. Each bin encoder 10 reads bins from the associated bin buffer 8 and converts a sequence of bins 9 into a codeword 11, which represents a sequence of bits. The bin buffers 8 represent first-in-first-out buffers; bins that are fed later (in sequential order) into a bin buffer 8 are not encoded before bins that are fed earlier (in sequential order) into the bin buffer. The codewords 11 that are output of a particular bin encoder 10 are written to a particular partial bitstream 12. The overall encoding algorithm converts source symbols 1 into two or more partial bitstreams 12, where the number of partial bitstreams is equal to the number of bin buffers and bin encoders. A bin encoder 10 may convert a variable number of bins 9 into a codeword 11 of a variable number of bits. One advantage of the above- and below-outlined PIPE coding is that the encoding of bins can be done in parallel (e.g. for different groups of probability measures), which reduces the processing time for several implementations.

[0060] Another advantage of PIPE coding is that the bin encoding, which is done by the bin encoders 10, can be specifically designed for different sets of parameters 5. In particular, the bin encoding and decoding can be optimized (in terms of coding efficiency and/or complexity) for different groups of estimated probabilities. On the one hand, this allows a reduction of the encoding/decoding complexity relative to arithmetic coding algorithms with similar coding efficiency. On the other hand, it allows an improvement of the coding efficiency relative to VLC coding algorithms with similar encoding/decoding complexity. The bin encoders 10 may implement different encoding algorithms (i.e.
mapping of bin sequences onto codewords) for different groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different groups of measures for an estimate of the probability for the less probable or more probable bin value for the current bin. Alternatively, the bin encoders 10 may implement different encoding algorithms for different channel protection codes. The bin encoders 10 may implement different encoding algorithms for different encryption schemes. The bin encoders 10 may implement different encoding algorithms for different combinations of channel protection codes and groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of channel protection codes and groups of measures for an estimate of the probability for the less probable or more probable bin value 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of encryption schemes and groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of encryption schemes and groups of measures for an estimate of the probability for the less probable or more probable bin value 5 for the current bin.

[0061] The bin encoders 10 - or one or more of the bin encoders - may represent binary arithmetic encoding engines. One or more of the bin encoders may represent a binary arithmetic coding engine, wherein the mapping from the representative LPS/LPB probability p LPS of a given bin buffer to a corresponding code interval width R LPS - i.e.
the interval subdivision of the internal state of the binary arithmetic coding engine, which is defined by the current interval width R and the current interval offset L, identifying, for example, the lower bound of the code interval - is realized by using a table lookup. For each table-based binary arithmetic coding engine associated with a given bin buffer, K representative interval width values {Q0, ..., QK-1} may be used for representing R LPS, with the choice of K and the representative interval width values {Q0, ..., QK-1} being dependent on the bin buffer. For a choice of K > 1, arithmetic encoding of a bin may involve the substeps of mapping the current interval width R to a quantization index q with values in {0, ..., K-1} and performing the interval subdivision by accessing the corresponding partial interval width value Qq from a lookup table using q as an index. For a choice of K = 1, i.e., for the case where only one representative interval width value Q0 is given, this value Q0 may be chosen as a power of two in order to allow decoding of multiple MPS/MPB values entering the corresponding bin buffer within a single renormalization cycle. The resulting codewords of each arithmetic coding engine may be separately transmitted, packetized, or stored, or they may be interleaved for the purpose of transmission or storage as described hereinafter.

[0062] That is, a binary arithmetic coding engine 10 could perform the following steps in coding the bins in its bin buffer 8: