Entropy encoding scheme

Application No. EP14160512; Filing date 2012-01-12; Publication No. EP2760138B1; Publication date 2018-03-07
Applicant: GE VIDEO COMPRESSION LLC; Inventors: MARPE DETLEV; NGUYEN TUNG; SCHWARZ HEIKO; WIEGAND THOMAS
Abstract: Decomposing a value range of the respective syntax elements into a sequence of n partitions, with the components of z lying within the respective partitions coded separately, at least one by VLC coding and at least one by PIPE or arithmetic coding, greatly increases the compression efficiency at a moderate coding overhead, since the coding scheme used may be better adapted to the syntax element statistics. Accordingly, in accordance with embodiments, syntax elements are decomposed into a respective number n of source symbols s_i with i=1...n, the respective number n of source symbols depending on which of a sequence of n partitions (140_1-3), into which a value range of the respective syntax elements is sub-divided, a value z of the respective syntax element falls into, so that a sum of values of the respective number of source symbols s_i yields z, and, if n>1, for all i=1...n-1, the value of s_i corresponds to a range of the i-th partition.
Claims
  • Entropy encoding apparatus comprising
    a decomposer (136) configured to convert a sequence (138) of syntax elements having a value range which is sub-divided into a sequence of N partitions (140_1-3) into a sequence (106) of source symbols by individually decomposing at least a subgroup of the syntax elements into a respective number n of source symbols s_i with i=1...n, the respective number n of source symbols depending on which of the sequence of N partitions (140_1-3) a value z of the respective syntax element falls into, so that a sum of values of the respective number of source symbols s_i yields z, and, if n>1, for all i=1...n-1, the value of s_i corresponds to a range of the i-th partition;
    a subdivider (100) configured to subdivide the sequence (106) of source symbols into a first subsequence (108) of source symbols and a second subsequence (110) of source symbols such that all source symbols s_x with x being a member of a first subset of {1...N} are contained within the first subsequence (108) and all source symbols s_y with y being a member of a second subset of {1...N}, disjoint to the first subset, are contained within the second subsequence (110);
    a VLC encoder (102) configured to symbol-wisely encode the source symbols of the first subsequence (108); and
    an arithmetic encoder (104) configured to encode the second subsequence (110) of source symbols,
    wherein the number of partitions N and the bounds of the partitions are dependent on the actual syntax element.
  • Entropy encoding apparatus according to claim 1, wherein the values z of the subgroup of the syntax elements are absolute values.
  • Entropy encoding apparatus according to claim 1 or 2, wherein the second subset is {1} with the sequence of N partitions being arranged such that a p-th partition covers higher values of the value range than a q-th partition for all p, q ∈ {1...N} with p > q.
  • Entropy encoding method comprising
    converting a sequence (138) of syntax elements having a value range which is sub-divided into a sequence of N partitions (140_1-3) into a sequence (106) of source symbols by individually decomposing at least a subgroup of the syntax elements into a respective number n of source symbols s_i with i=1...n, the respective number n of source symbols depending on which of the sequence of N partitions (140_1-3) a value z of the respective syntax element falls into, so that a sum of values of the respective number of source symbols s_i yields z, and, if n>1, for all i=1...n-1, the value of s_i corresponds to a range of the i-th partition;
    subdividing the sequence (106) of source symbols into a first subsequence (108) of source symbols and a second subsequence (110) of source symbols such that all source symbols s_x with x being a member of a first subset of {1...N} are contained within the first subsequence (108) and all source symbols s_y with y being a member of a second subset of {1...N}, disjoint to the first subset, are contained within the second subsequence (110);
    by VLC encoding, symbol-wisely encoding the source symbols of the first subsequence (108); and
    by arithmetic encoding, encoding the second subsequence (110) of source symbols,
    wherein the number of partitions N and the bounds of the partitions are dependent on the actual syntax element.
  • A computer program having a program code for performing, when running on a computer, a method according to claim 4.
  • Description

  • [0001]


    The present invention relates to entropy encoding and may be used in applications such as, for example, video and audio compression.



  • [0002]


    Entropy coding, in general, can be considered as the most generic form of lossless data compression. Lossless compression aims to represent discrete data with fewer bits than needed for the original data representation but without any loss of information. Discrete data can be given in the form of text, graphics, images, video, audio, speech, facsimile, medical data, meteorological data, financial data, or any other form of digital data.



  • [0003]


    In entropy coding, the specific high-level characteristics of the underlying discrete data source are often neglected. Consequently, any data source is considered to be given as a sequence of source symbols that takes values in a given m-ary alphabet and that is characterized by a corresponding (discrete) probability distribution {p_1, ..., p_m}. In these abstract settings, the lower bound of any entropy coding method in terms of expected codeword length in bits per symbol is given by the entropy

    H = -\sum_{i=1}^{m} p_i \log_2 p_i .

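    As a quick illustration of this bound, here is a minimal sketch; the function name and the example distribution are ours, not from the patent:

```python
import math

def entropy(probabilities):
    """Entropy H of {p_1, ..., p_m}: the lower bound on expected bits/symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# A skewed 4-ary source: no lossless code can beat 1.75 bits per symbol.
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```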
  • [0004]


    Huffman codes and arithmetic codes are well-known examples of practical codes capable of approximating the entropy limit (in a certain sense). For a fixed probability distribution, Huffman codes are relatively easy to construct. The most attractive property of Huffman codes is that their implementation can be efficiently realized by the use of variable-length code (VLC) tables. However, when dealing with time-varying source statistics, i.e., changing symbol probabilities, the adaptation of the Huffman code and its corresponding VLC tables is quite demanding, both in terms of algorithmic complexity and in terms of implementation costs. Also, in the case of having a dominant alphabet value with p_k > 0.5, the redundancy of the corresponding Huffman code (without using any alphabet extension such as run length coding) may be quite substantial. Another shortcoming of Huffman codes is that dealing with higher-order probability modeling may require multiple sets of VLC tables. Arithmetic coding, on the other hand, while being substantially more complex than VLC, offers the advantage of a more consistent and adequate handling when coping with adaptive and higher-order probability modeling as well as with highly skewed probability distributions. Actually, this characteristic basically results from the fact that arithmetic coding provides a mechanism, at least conceptually, to map any given value of probability estimate in a more or less direct way to a portion of the resulting codeword. Being provided with such an interface, arithmetic coding allows for a clean separation between the tasks of probability modeling and probability estimation, on the one hand, and the actual entropy coding, i.e., mapping of symbols to codewords, on the other hand.



  • [0005]


    An alternative to arithmetic coding and VLC coding is PIPE coding. To be more precise, in PIPE coding, the unit interval is partitioned into a small set of disjoint probability intervals for pipelining the coding processing along the probability estimates of random symbol variables. According to this partitioning, an input sequence of discrete source symbols with arbitrary alphabet sizes may be mapped to a sequence of alphabet symbols and each of the alphabet symbols is assigned to one particular probability interval which is, in turn, encoded by an especially dedicated entropy encoding process. With each of the intervals being represented by a fixed probability, the probability interval partitioning entropy (PIPE) coding process may be based on the design and application of simple variable-to-variable length codes. The probability modeling can either be fixed or adaptive. However, while PIPE coding is significantly less complex than arithmetic coding, it still has a higher complexity than VLC coding.
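    As a rough sketch of this interval partitioning idea, the probability estimate of each bin can be mapped to one of K dedicated encoders; the number of intervals and the bounds below are invented for illustration:

```python
# Upper bounds of K = 4 assumed intervals partitioning (0, 0.5].
K_BOUNDS = (0.17, 0.30, 0.42, 0.50)

def interval_index(p_estimate):
    """Route a bin to the encoder of the interval containing its estimate."""
    for k, bound in enumerate(K_BOUNDS):
        if p_estimate <= bound:
            return k
    raise ValueError("probability estimate must lie in (0, 0.5]")

# Each index selects one dedicated entropy encoder operating with a fixed
# representative probability, e.g. a simple variable-to-variable length code.
print(interval_index(0.38))  # 2: the third dedicated entropy encoder
```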



  • [0006]


    Therefore, it would be favorable to have an entropy coding scheme at hand which enables a better tradeoff between coding complexity on the one hand and compression efficiency on the other hand to be achieved, even when compared to PIPE coding, which already combines advantages of both arithmetic coding and VLC coding.



  • [0007]


    Further, in general, it would be favorable to have an entropy coding scheme at hand which enables better compression efficiency per se, at a moderate coding complexity.



  • [0008]




    WO 2008/129021 A2 relates to the scalable compression of time-consistent 3D lattice sequences. Regarding quantizing and entropy encoding, the document describes that prediction errors of the lattice vectors are compressed component by component. In particular, the components are mapped to integer magnitudes, i.e. together with a sign, and a maximum for the magnitude, i.e. imax, is used to define an interval within the integer numbers; components falling within this interval are entropy-encoded. The residual magnitude, i.e. the distance to the closer end of the interval, is encoded by using Golomb codes.



  • [0009]


    It is an object of the present invention to provide an entropy coding concept which fulfils the above-identified demand, i.e. enables a better tradeoff between coding complexity on the one hand and compression efficiency on the other hand to be achieved.



  • [0010]


    This object is achieved by the subject matter of the independent claims.



  • [0011]


    The present invention is based on the idea that decomposing a value range of the respective syntax elements into a sequence of n partitions, with the components of the syntax element values z lying within the respective partitions coded separately, at least one by VLC coding and at least one by arithmetic coding or any other entropy coding method, may greatly increase the compression efficiency at a moderate coding overhead, in that the coding scheme used may be better adapted to the syntax element statistics. Accordingly, in accordance with embodiments of the present invention, syntax elements are decomposed into a respective number n of source symbols s_i with i=1...n, the respective number n of source symbols depending on which of a sequence of n partitions (140_1-3), into which a value range of the respective syntax elements is sub-divided, a value z of the respective syntax element falls into, so that a sum of values of the respective number of source symbols s_i yields z, and, if n>1, for all i=1...n-1, the value of s_i corresponds to a range of the i-th partition.



  • [0012]


    Preferred aspects of the present invention are the subject of the enclosed dependent claims.



  • [0013]


    Preferred embodiments of the present invention are described below with respect to the figures. Insofar as they do not use arithmetic coding alongside the VLC coding, these embodiments represent comparison embodiments. Among the figures,



  • Fig. 1a shows a block diagram of an entropy encoding apparatus;
  • Fig. 1b shows a schematic diagram illustrating a possible decomposition of syntax elements into source symbols;
  • Fig. 1c shows a flow diagram illustrating a possible mode of operation of the decomposer of Fig. 1a in decomposing syntax elements into source symbols;
  • Fig. 2a shows a block diagram of an entropy decoding apparatus;
  • Fig. 2b shows a flow diagram illustrating a possible mode of operation of the composer of Fig. 2a in composing syntax elements from source symbols;
  • Fig. 3 shows a block diagram of a PIPE encoder according to a comparison embodiment which may be used in Fig. 1a;
  • Fig. 4 shows a block diagram of a PIPE decoder suitable for decoding a bitstream generated by the PIPE encoder of Fig. 3, according to a comparison embodiment, which may be used in Fig. 2a;
  • Fig. 5 shows a schematic diagram illustrating a data packet with multiplexed partial bitstreams;
  • Fig. 6 shows a schematic diagram illustrating a data packet with an alternative segmentation using fixed-size segments;
  • Fig. 7 shows a block diagram of a PIPE encoder using partial bitstream interleaving;
  • Fig. 8 shows a schematic illustrating examples for the status of a codeword buffer at the encoder side of Fig. 7;
  • Fig. 9 shows a block diagram of a PIPE decoder using partial bitstream interleaving;
  • Fig. 10 shows a block diagram of a PIPE decoder using codeword interleaving using a single set of codewords;
  • Fig. 11 shows a block diagram of a PIPE encoder using interleaving of fixed-length bit sequences;
  • Fig. 12 shows a schematic illustrating examples for the status of a global bit buffer at the encoder side of Fig. 11;
  • Fig. 13 shows a block diagram of a PIPE decoder using interleaving of fixed-length bit sequences;
  • Fig. 14 shows a graph illustrating an optimal probability interval discretization into K = 4 intervals, assuming a uniform probability distribution in (0, 0.5];
  • Fig. 15 shows a schematic diagram illustrating a tree of binary events for an LPB probability of p = 0.38 and an associated variable length code obtained by the Huffman algorithm;
  • Fig. 16 shows a graph from which the relative bit rate increase ρ(p, C) for optimal codes C, given a maximum number of table entries Lm, may be gathered;
  • Fig. 17 shows a graph illustrating the rate increase for the theoretically optimal probability interval partitioning into K = 12 intervals and a real design with V2V codes with a maximum number of Lm = 65 table entries;
  • Fig. 18 shows a schematic diagram illustrating an example for the conversion of a ternary choice tree into a full binary choice tree;
  • Fig. 19 shows a block diagram of a system comprising an encoder (left part) and decoder (right part);
  • Fig. 20 shows a block diagram of an entropy encoding apparatus;
  • Fig. 21 shows a block diagram of an entropy decoding apparatus;
  • Fig. 22 shows a block diagram of an entropy encoding apparatus;
  • Fig. 23 shows a schematic diagram illustrating examples for the status of a global bit buffer at the encoder side of Fig. 22;
  • Fig. 24 shows a block diagram of an entropy decoding apparatus.



  • [0014]


    Before several embodiments of the present application are described in the following with respect to the figures, it is noted that equal reference signs are used throughout the figures in order to denote equal or equivalent elements in these figures, and the description of these elements presented with any of the previous figures shall also apply to any of the following figures as far as the previous description does not conflict with the description of the current figures.



  • [0015]



    Fig. 1a shows an entropy encoding apparatus. The apparatus comprises a subdivider 100, a VLC encoder 102 and a PIPE encoder 104.



  • [0016]


    The subdivider 100 is configured to subdivide a sequence of source symbols 106 into a first subsequence 108 of source symbols and a second subsequence 110 of source symbols. The VLC encoder 102 has an input thereof connected to a first output of subdivider 100 and is configured to symbol-wisely convert the source symbols of the first subsequence 108 into codewords forming a first bitstream 112. The VLC encoder 102 may comprise a look-up table and use, individually, the source symbols as an index in order to look up, per source symbol, a respective codeword in the look-up table. The VLC encoder outputs the latter codeword and proceeds with the following source symbol in subsequence 108 in order to output a sequence of codewords in which each codeword is associated with exactly one of the source symbols within subsequence 108. The codewords may have different lengths and may be defined such that no codeword forms a prefix of any of the other codewords. Additionally, the look-up table may be static.
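    A minimal sketch of such a table-based VLC encoder follows; the table contents are invented for illustration and are merely chosen to be prefix-free:

```python
# Static look-up table: source symbol -> prefix-free codeword (illustrative).
VLC_TABLE = {0: "1", 1: "01", 2: "001", 3: "0001"}

def vlc_encode(symbols):
    """Symbol-wise encoding of the first subsequence into bitstream bits."""
    return "".join(VLC_TABLE[s] for s in symbols)

print(vlc_encode([0, 2, 1]))  # "100101": codeword borders stay decodable
```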



  • [0017]


    The PIPE encoder 104 has an input thereof connected to a second output of subdivider 100 and is configured to encode the second subsequence 110 of source symbols, represented in form of a sequence of alphabet symbols, and comprises an assigner 114 configured to assign a measure for an estimate of a probability distribution among possible values the respective alphabet symbols may assume, to each alphabet symbol of the sequence of alphabet symbols based on information contained within previous alphabet symbols of the sequence of alphabet symbols, a plurality of entropy encoders 116 each of which is configured to convert the alphabet symbols forwarded to the respective entropy encoder into a respective second bitstream 118, and a selector 120 configured to forward each alphabet symbol of the second subsequence 110 to a selected one of the plurality of entropy encoders 116, the selection depending on the afore-mentioned measure for the estimate of the probability distribution assigned to the respective alphabet symbol. The association between source symbols and alphabet symbols may be such that each alphabet symbol is uniquely associated with exactly one source symbol of subsequence 110 in order to represent, along with possibly further alphabet symbols of the sequence of alphabet symbols which may immediately follow each other, this one source symbol.



  • [0018]


    As described in more detail below, the sequence 106 of source symbols may be a sequence of syntax elements of a parsable bitstream. The parsable bitstream may, for example, represent video and/or audio content in a scalable or non-scalable manner with the syntax elements representing, for example, transform coefficient levels, motion vectors, motion picture reference indices, scale factors, audio envelope energy values or the like. The syntax elements may, in particular, be of different type or category with syntax elements of the same type, for example, having the same meaning within the parsable bitstream but with respect to different portions thereof, such as different pictures, different macroblocks, different spectral components or the like, whereas syntax elements of different type may have a different meaning within the bitstream, such as a motion vector has a different meaning than a syntax element representing a transform coefficient level representing the motion prediction residual.



  • [0019]


    The subdivider 100 may be configured to perform the subdivision depending on the type of the syntax elements. That is, subdivider 100 may forward syntax elements of a first group of types to the first subsequence 108 and forward syntax elements of a second group of types, distinct from the first group, to the second subsequence 110. The subdivision performed by subdivider 100 may be designed such that the symbol statistics of the syntax elements within subsequence 108 are suitable for being VLC encoded by VLC encoder 102, i.e. result in almost the minimum entropy possible despite the use of VLC encoding and its restrictions with regard to its suitability for certain symbol statistics as outlined in the introductory portion of the specification of the present application. On the other hand, the subdivider 100 may forward all other syntax elements to the second subsequence 110 so that these syntax elements, having symbol statistics not suitable for VLC encoding, are encoded by the more complex, but more efficient - in terms of compression ratio - PIPE encoder 104.
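    A hedged sketch of this type-dependent routing; which syntax element types count as VLC-suited is an assumption made up for this example:

```python
# Assumed classification of syntax element types (illustrative only).
VLC_SUITED_TYPES = {"subdivision_flag", "reference_index"}

def subdivide(syntax_elements):
    """Split (type, value) pairs into the VLC path and the PIPE path."""
    first, second = [], []
    for elem_type, value in syntax_elements:
        (first if elem_type in VLC_SUITED_TYPES else second).append(value)
    return first, second

first, second = subdivide([("reference_index", 2), ("coeff_level", 7)])
print(first, second)  # [2] [7]
```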



  • [0020]


    As is also the case with the more detailed embodiments described with respect to the following figures, the PIPE encoder 104 may comprise a symbolizer 122 configured to individually map each syntax element of the second subsequence 110 into a respective partial sequence of alphabet symbols, together forming the afore-mentioned sequence 124 of alphabet symbols. In other words, the symbolizer 122 may not be present if, for example, the source symbols of subsequence 110 are already represented as respective partial sequences of alphabet symbols. The symbolizer 122 is, for example, advantageous in case the source symbols within the subsequence 110 are of different alphabets, and especially alphabets having different numbers of possible alphabet symbols. Namely, in this case, the symbolizer 122 is able to harmonize the alphabets of the symbols arriving within substream 110. The symbolizer 122 may, for example, be embodied as a binarizer configured to binarize the symbols arriving within subsequence 110.



  • [0021]


    As mentioned before, the syntax elements may be of different type. This may also be true for the syntax elements within substream 110. The symbolizer 122 may then be configured to perform the individual mapping of the syntax elements of the subsequence 110 using a symbolizing mapping scheme, such as a binarization scheme, different for syntax elements of different type. Examples for specific binarization schemes are presented in the following description, such as a unary binarization scheme, an exp-Golomb binarization scheme of order 0 or order 1, for example, or a truncated unary binarization scheme, a truncated and reordered exp-Golomb order 0 binarization scheme or a non-systematic binarization scheme.



  • [0022]


    Accordingly, the entropy encoders 116 could be configured to operate on a binary alphabet. Finally, it should be noted that symbolizer 122 may be regarded as being part of the PIPE encoder 104 itself as shown in Fig. 1a. Alternatively, however, the binarizer may be regarded as being external to the PIPE encoder.



  • [0023]


    Similar to the latter notice, it should be noted that the assigner 114, although shown to be connected serially between symbolizer 122 and selector 120, may alternatively be regarded as being connected between an output of symbolizer 122 and a first input of selector 120, with an output of assigner 114 being connected to another input of selector 120, as later described with respect to Fig. 3. In effect, the assigner 114 accompanies each alphabet symbol with the afore-mentioned measure for an estimation of the probability distribution.



  • [0024]


    As far as the output of the entropy encoding apparatus of Fig. 1a is concerned, same is composed of the first bitstream 112 output by VLC encoder 102 and the plurality of second bitstreams 118 output by the plurality of entropy encoders 116. As further described below, all these bitstreams may be transmitted in parallel. Alternatively, same may be interleaved into a common bitstream 126 by use of an interleaver 128. Figs. 22 to 24 show examples of such bitstream interleaving. As further shown in Fig. 1a, the PIPE encoder 104 itself may comprise its own interleaver 130 in order to interleave the plurality of second bitstreams 118 into a common PIPE coded bitstream 132. Possibilities for such an interleaver 130 are derivable from the description of Figs. 5 to 13. Bitstream 132 and bitstream 112 may, in a parallel configuration, represent the output of the entropy encoding apparatus of Fig. 1a. Alternatively, another interleaver 134 may interleave both bitstreams, in which case interleavers 130 and 134 would form two stages of one two-stage interleaver 128.



  • [0025]


    As has been described above, subdivider 100 may perform the subdivision syntax-element-wise, i.e. the source symbols the subdivider 100 operates on may be whole syntax elements, or alternatively speaking, subdivider 100 may operate in units of syntax elements.



  • [0026]


    However, the entropy encoding apparatus of Fig. 1a may comprise a decomposer 136 in order to decompose syntax elements within a parsable bitstream 138 individually into one or more of the source symbols of the source symbol sequence 106 entering subdivider 100.



  • [0027]


    In particular, decomposer 136 may be configured to convert the sequence 138 of syntax elements into the sequence 106 of source symbols by individually decomposing each syntax element into a respective integer number of source symbols. The integer number may vary among the syntax elements. In particular, some of the syntax elements may even be left unchanged by decomposer 136, whereas other syntax elements are decomposed into exactly two, or at least two, source symbols. The subdivider 100 may be configured to forward one of the source symbols of such decomposed syntax elements to the first subsequence 108 of source symbols and another one of the source symbols of the same decomposed syntax element to the second subsequence 110 of source symbols. As mentioned above, the syntax elements within bitstream 138 may be of different type, and the decomposer 136 may be configured to perform the individual decomposing depending on the type of the syntax element. The decomposer 136 preferably performs the individual decomposing of the syntax elements such that there exists a predetermined unique reverse mapping, later on used at the decoding side, from the integer number of source symbols to the respective syntax element, common for all syntax elements.



  • [0028]


    For example, the decomposer 136 may be configured to decompose syntax elements z in parsable bitstream 138 into two source symbols x and y so that z = x + y, z = x - y, z = x · y or z = x : y. By this measure, decomposer 136 may decompose the syntax elements into two components, namely source symbols of source symbol stream 106, one of which is suitable to be VLC encoded in terms of compression efficiency, such as x, and the other of which is not suitable for VLC encoding and is, therefore, passed on to the second substream 110 rather than the first substream 108, such as y. The decomposition used by decomposer 136 need not be bijective. However, as mentioned before, there should exist a reverse mapping enabling a unique retrieval of the syntax element out of the possible decompositions among which decomposer 136 may choose if the decomposition is not bijective.



  • [0029]


    Up to now, different possibilities have been described for the handling of different syntax elements. Whether such syntax elements or cases exist is optional. The further description, however, concentrates on syntax elements which are decomposed by decomposer 136 according to the following principle.



  • [0030]


    As shown in Fig. 1b, the decomposer 136 is configured to decompose certain syntax elements z in parsable bitstream 138 in stages. Two or more stages may exist. The stages divide the value range of syntax element z into two or more adjacent subintervals or sub-ranges as shown in Fig. 1c. The value range of the syntax element may have two infinite endpoints, merely one, or may have definite endpoints. In Fig. 1c, the value range of the syntax element is exemplarily sub-divided into three partitions 140_1-3. As shown in Fig. 1b, if the syntax element is greater than or equal to the bound 142 of the first partition 140_1, i.e. the upper limit limit1 separating partitions 140_1 and 140_2, then limit1 is subtracted from the syntax element, yielding z', and z is again checked as to whether it is even greater than or equal to the bound 144 of the second partition 140_2, i.e. the upper limit limit2 separating partitions 140_2 and 140_3. If so, the range limit2 - limit1 of the second partition 140_2 is subtracted from z', resulting in z''. In the first case, where z is smaller than limit1, the syntax element z is sent to subdivider 100 in plain. In case of z lying between limit1 and limit2, the syntax element z is sent to subdivider 100 as a tuple (limit1, z') with z = limit1 + z', and in case of z lying above limit2, the syntax element z is sent to subdivider 100 as a triplet (limit1, limit2 - limit1, z'') with z = limit1 + (limit2 - limit1) + z''. The first (or sole) component, i.e. z or limit1, forms a first source symbol to be coded by subdivider 100, the second component, i.e. z' or limit2 - limit1, forms a second source symbol to be coded by subdivider 100, if present, and the third component, i.e. z'', forms a third source symbol to be coded by subdivider 100, if present. Thus, in accordance with Figs. 1b and 1c, the syntax element is mapped to any of 1 to 3 source symbols, but generalizations to a smaller or larger maximum number of source symbols are readily derivable from the above description, and such alternatives will also be described in the following.
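    The staged decomposition of Figs. 1b and 1c might be sketched as follows; the limit values and the function name are illustrative assumptions:

```python
def decompose(z, limit1, limit2):
    """Map z to 1-3 source symbols whose values sum to z (cf. Figs. 1b/1c)."""
    if z < limit1:
        return (z,)                         # first partition: send z in plain
    z1 = z - limit1                         # subtract range of partition 1
    if z1 < limit2 - limit1:
        return (limit1, z1)                 # second partition: tuple (limit1, z')
    z2 = z1 - (limit2 - limit1)             # subtract range of partition 2
    return (limit1, limit2 - limit1, z2)    # third partition: triplet

assert sum(decompose(25, limit1=8, limit2=16)) == 25  # (8, 8, 9)
```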



  • [0031]


    In any case, all these different components or resulting source symbols are, according to the below embodiments, coded with different ones of the available coding alternatives: at least one of them is forwarded by subdivider 100 to PIPE coder 104, and at least another one thereof is sent to VLC coder 102.



  • [0032]


    Particularly advantageous embodiments are outlined in more detail below.



  • [0033]


    After having described above an entropy encoding apparatus, an entropy decoding apparatus is described with respect to Fig. 2a. The entropy decoding apparatus of Fig. 2a comprises a VLC decoder 200 and a PIPE decoder 202. The VLC decoder 200 is configured to code-wisely reconstruct source symbols of a first subsequence 204 from codewords of a first bitstream 206. The first bitstream 206 is equal to bitstream 112 of Fig. 1a, and the same applies to subsequence 204 as far as subsequence 108 of Fig. 1a is concerned. The PIPE decoder 202 is configured to reconstruct a second subsequence 208 of source symbols, represented in the form of a sequence of alphabet symbols, and comprises a plurality of entropy decoders 210, an assigner 212 and a selector 214. The plurality of entropy decoders 210 are configured to convert a respective one of second bitstreams 216 into alphabet symbols of the sequence of alphabet symbols. The assigner 212 is configured to assign a measure of an estimate of a probability distribution among the possible values the respective alphabet symbols may assume, to each alphabet symbol of the sequence of alphabet symbols representing the second subsequence 208 of source symbols to be reconstructed, based on information contained within previously reconstructed alphabet symbols of the sequence of alphabet symbols. To this end, assigner 212 may be serially connected between an output of selector 214 and an input thereof, while further inputs of selector 214 have outputs of the entropy decoders 210 respectively connected thereto. The selector 214 is configured to retrieve each alphabet symbol of the sequence of alphabet symbols from a selected one of the plurality of entropy decoders 210, the selection depending on the measure assigned to the respective alphabet symbol. In other words, the selector 214 along with the assigner 212 is operative to retrieve the alphabet symbols obtained by entropy decoders 210 in an order among the entropy decoders 210 obtained by surveying information contained within previous alphabet symbols of the sequence of alphabet symbols. In even other words, assigner 212 and selector 214 are able to reconstruct the original order of the alphabet symbols from alphabet symbol to alphabet symbol. Along with forecasting the next alphabet symbol, assigner 212 is able to determine the afore-mentioned measure of the estimate of the probability distribution for the respective alphabet symbol, by use of which selector 214 selects among the entropy decoders 210 to retrieve the actual value of this alphabet symbol. To be even more precise, and as will be described in more detail below, the PIPE decoder 202 may be configured to reconstruct the subsequence 208 of source symbols, represented in form of the sequence of alphabet symbols, responsive to alphabet symbol requests sequentially requesting the alphabet symbols, and the assigner 212 may be configured to assign to each request for an alphabet symbol of the sequence of alphabet symbols representing the second subsequence (208) of source symbols to be reconstructed, the afore-mentioned measure of an estimate of a probability distribution among the possible values the respective alphabet symbol may assume. Accordingly, the selector 214 may be configured to retrieve, for each request for an alphabet symbol of the sequence of alphabet symbols representing the second subsequence (208) of source symbols to be reconstructed, the respective alphabet symbol of the sequence of alphabet symbols from a selected one of the plurality of entropy decoders 210, the selection depending on the measure assigned to the respective request for the respective alphabet symbol. The concordance between requests at the decoding side on the one hand, and the data flow or encoding at the encoding side on the other hand, will be outlined in more detail with respect to Fig. 4.



  • [0034]


    As the first subsequence 204 of source symbols and the second subsequence 208 of source symbols commonly form one common sequence 218 of source symbols, the entropy decoding apparatus of Fig. 2a may, optionally, comprise a recombiner 220 configured to recombine the first subsequence 204 and the second subsequence 208 to obtain the common sequence 218 of source symbols. This common sequence 218 of source symbols yields a reconstruction of sequence 106 of Fig. 1a.



  • [0035]


    In accordance with the description presented above with respect to Fig. 1a, the source symbols of the first and second subsequences 204 and 208 may be syntax elements of a parsable bitstream. In this case, recombiner 220 could be configured to reconstruct this parsable bitstream of the sequence 218 of syntax elements by interleaving the source symbols arriving via first and second subsequences 204 and 208 in an order prescribed by some parsing rule defining an order among the syntax elements. In particular, the syntax elements may be, as described above, of different type, and the recombiner 220 may be configured to retrieve or request syntax elements of a first group of types from the VLC decoder 200 via substream 204, and syntax elements of a second group of types from the PIPE decoder 202 via substream 208. Accordingly, whenever the just-mentioned parsing rule indicates that a syntax element of a type within the first group is the next in line, recombiner 220 inserts an actual source symbol of subsequence 204 into common sequence 218, and otherwise one from subsequence 208.



  • [0036]


    Likewise, the PIPE decoder 202 could comprise a desymbolizer 222 connected between the output of selector 214 and an input of recombiner 220. Similar to the description above with respect to Fig. 1a, desymbolizer 222 could be regarded as being external to the PIPE decoder 202 and could even be arranged behind recombiner 220, i.e. at the output side of recombiner 220, alternatively. The desymbolizer 222 could be configured to remap, in units of partial sequences of alphabet symbols, the sequence of alphabet symbols 224 output by selector 214 into the source symbols, i.e. syntax elements of subsequence 208. Similar to recombiner 220, desymbolizer 222 knows about the construction of possible partial sequences of alphabet symbols. In particular, desymbolizer 222 may analyze recently received alphabet symbols from selector 214 in order to ascertain as to whether these recently received alphabet symbols yield a valid partial sequence of alphabet symbols associated with a respective value of the respective syntax element, or as to whether this is not the case, and which alphabet symbol is missing next. In even other words, the desymbolizer 222 knows, at any time, as to whether further alphabet symbols have to be received from selector 214 in order to finish the reception of a respective syntax element or not, and accordingly, to which syntax element a respective one of the alphabet symbols output by selector 214 belongs. To this end, the desymbolizer 222 may use a symbolizing (de)mapping scheme differing for syntax elements of different type. Similarly, assigner 212 knows about the association of a current alphabet symbol to be retrieved from any of the entropy decoders 210 by selector 214 to a respective one of the syntax elements, and may set the above-mentioned measure of estimation of a probability distribution of this alphabet symbol accordingly, i.e. depending on the associated syntax element type. Even further, assigner 212 may differentiate between different alphabet symbols belonging to the same partial sequence of a current syntax element and may set the measure of estimate of probability distribution differently for these alphabet symbols. Details in this regard are described in more detail below. As described therein, assigner 212 may be configured to assign contexts to the alphabet symbols. The assignment may be dependent on the syntax element type and/or the position within the partial sequence of alphabet symbols of the current syntax element. As soon as assigner 212 has assigned a context to a current alphabet symbol to be retrieved from any of the entropy decoders 210 by selector 214, the alphabet symbol inherently has the measure of estimate of probability distribution associated therewith, as each context has its measure of estimate associated therewith. Further, the context - and its associated measure of estimate of probability distribution - may be adapted according to the actual statistics of the alphabet symbols of the respective context having been retrieved from the entropy decoders 210 so far. Details in this regard are presented in more detail below.



  • [0037]


    Similar to the above discussion of Fig. 1a, it may be possible that the correspondence between the afore-mentioned source symbols of subsequences 204 and 208 and the syntax elements is not a one-to-one correspondence. Rather, the syntax elements may have been decomposed into an integer number of source symbols with the number, eventually, varying among the syntax elements, but being, in any case, greater than one for at least one syntax element. As noted above, the following description focuses on the handling of this kind of syntax elements, and syntax elements of other kinds may even not be present.



  • [0038]


    For handling the just-mentioned syntax elements, the entropy decoding apparatus of Fig. 2a may comprise a composer 224 configured to redo the decomposition performed by decomposer 136 of Fig. 1a. In particular, composer 224 may be configured to compose the sequence 226 of syntax elements from the source symbols of sequence 218 or, if the recombiner 220 is missing, subsequences 204 and 208, by individually composing each syntax element from a respective integer number of source symbols, with one of the source symbols of the integer number of source symbols belonging to the first subsequence 204 and another one of the source symbols of the integer number of source symbols of the same syntax element belonging to the second subsequence 208. By this measure, certain syntax elements may have been decomposed at the encoder side so as to separate components suitable for VLC decoding from a remaining component having to be passed through the PIPE decoding path. Similar to the above discussion, the syntax elements may be of different type and the composer 224 may be configured to perform the individual composition depending on the type of the syntax elements. In particular, composer 224 may be configured to obtain the respective syntax elements by logically or mathematically combining the integer number of source symbols of the respective syntax element. For example, composer 224 may be configured, for each syntax element, to apply +, -, : or · to first and second source symbols of one syntax element.



  • [0039]


    As described above, the embodiments described herein below, however, concentrate on syntax elements which are decomposed by decomposer 136 according to Figs. 1b and 1c and the alternatives described with regard thereto. Fig. 2b shows how composer 224 may function to reconstruct these syntax elements from their source symbols 218.



  • [0040]


    As shown in Fig. 2b, the composer 224 is configured to compose such syntax elements z in stages from incoming source symbols s_1 to s_x, with x being any of 1 to 3 in the present example. Two or more stages may exist. As shown in Fig. 2b, composer 224 preliminarily sets z to be the first symbol s_1 and checks as to whether z is equal to limit1. If this is not the case, z has been found. Otherwise, composer 224 adds the next source symbol s_2 of source symbol stream 218 to z and again checks as to whether this z equals limit2. If not, z has been found. Otherwise, composer 224 adds the next source symbol s_3 of source symbol stream 218 to z, in order to obtain z in its final form. Generalizations to a smaller or larger maximum number of source symbols are readily derivable from the above description, and such alternatives will also be described in the following.
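    The corresponding staged composition, the inverse of the decomposition sketched after Fig. 1c above, might look as follows; names and limits are again illustrative:

```python
def compose(symbols, limit1, limit2):
    """Re-assemble z from 1-3 incoming source symbols as in Fig. 2b."""
    it = iter(symbols)
    z = next(it)
    if z != limit1:
        return z            # only one source symbol was sent
    z += next(it)
    if z != limit2:
        return z            # two source symbols complete z
    return z + next(it)     # third source symbol finishes z

assert compose((8, 8, 9), limit1=8, limit2=16) == 25
assert compose((5,), limit1=8, limit2=16) == 5
```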



  • [0041]


    In any case, all these different components or resulting source symbols are, according to the below description, coded with different ones of the available coding alternatives: at least one of them is forwarded by subdivider 100 to PIPE coder 104, and at least another one thereof is sent to VLC coder 102.



  • [0042]


    Particularly advantageous details are outlined in more detail below. These details concentrate on favorable possibilities of dividing the value range of the syntax elements and on the VLC and PIPE entropy coding schemes which may be used to encode the source symbols.



  • [0043]


    Further, as has also been described above with respect to Fig. 1a, the entropy decoding apparatus of Fig. 2a may be configured to receive the first bitstream 206 as well as the plurality of second bitstreams 216 separately or in an interleaved form by way of an interleaved bitstream 228. In the latter case, the entropy decoding apparatus of Fig. 2a may comprise a deinterleaver 230 configured to deinterleave the interleaved bitstream 228 to obtain the first bitstream 206 on the one hand and the plurality of second bitstreams 216 on the other hand. Similar to the above discussion of Fig. 1a, the deinterleaver 230 may be subdivided into two stages, namely a deinterleaver 232 for deinterleaving the interleaved bitstream 228 into two parts, namely bitstream 206 on the one hand and an interleaved form 234 of the second bitstreams 216 on the other hand, and a deinterleaver 236 for deinterleaving the latter bitstream 234 to obtain the individual bitstreams 216.



  • [0044]


    Thus, Fig. 1a and Fig. 2a showed an entropy encoding apparatus on the one hand and an entropy decoding apparatus suitable for decoding the encoding result obtained by the entropy encoding apparatus of Fig. 1a, on the other hand. Details regarding many of the elements shown in Figs. 1a and 2a are described in more detail with regard to the further figures. Accordingly, reference is made to these details in the following description and these details shall be regarded as also applying to Figs. 1a and 2a individually, as far as these details are separately implementable in the above-described encoders and decoders. Merely with respect to the interleaver 128 and deinterleaver 230, some additional notice is made here. In particular, interleaving of the bitstreams 112 and 118 may be favorable in case the bitstreams have to be multiplexed into one channel in order to be transmitted. In this case, it may be favorable to interleave the VLC bitstream 112 on the one hand and the PIPE encoding bitstreams 118 on the other hand so as to obey certain conditions to be met, such as obeying some maximum decoding delay. In other words, it may be necessary that the relative time displacement between the times at which the syntax elements and source symbols, respectively, are retrievable at the decoding side on the one hand, and the relative displacement in time in accordance with their position in the parsable bitstream on the other hand, does not exceed a certain maximum delay. Many alternatives for solving this problem are described below. One of these possibilities involves the entropy encoders 116 being of a variable length coder type configured to map alphabet symbol sequences to codewords, and the entropy decoders 210 doing the reverse mapping. The codewords of VLC bitstream 112 and PIPE bitstreams 118 may be, but do not have to be, selected such that no codeword of any of these bitstreams is a prefix of any codeword of any of the other bitstreams, so that the codeword borders remain uniquely determinable at the decoder side. In any case, the interleaver 128 may be configured to reserve and buffer a sequence of codeword entries for the codewords within the first bitstream 112 and the second bitstreams 118 in a sequential order depending on the order in which the alphabet symbols of the sequence 124 of alphabet symbols forwarded by the selector 120 to the plurality of entropy encoders 116 result in a beginning of a new alphabet symbol sequence to be mapped to a respective codeword at the respective entropy encoder 116, and a new source symbol of the first substream 108 is mapped by the VLC encoder 102, respectively. In other words, interleaver 128 inserts the codewords of bitstream 112 into the common bitstream 126 in the order of the source symbols from which they have been obtained by VLC encoding, i.e. in their order within substream 108 and source symbol stream 106, respectively. Codewords output by entropy encoders 116 are inserted into the common bitstream 126 between consecutive ones of the codewords of the VLC bitstream 112. Owing to the PIPE encoding categorization of the alphabet symbols by assigner 114 and selector 120, respectively, each of the codewords of the entropy encoders 116 has alphabet symbols of different source symbols of substream 110 encoded therein. The position of the codewords of the PIPE encoded bitstreams 118 within the common bitstream 126, among each other and relative to the VLC codewords of bitstream 112, is determined by the first alphabet symbol encoded in each codeword, respectively, i.e. the oldest one in time. The order of these primary alphabet symbols encoded into the codewords of bitstreams 118 within the alphabet symbol stream 124 determines the order of the codewords of bitstreams 118 within the common bitstream 126 among each other; relative to the VLC codewords of bitstream 112, the source symbol to which these primary alphabet symbols encoded into the codewords of bitstreams 118 belong determines between which consecutive codewords of bitstream 112 the respective codeword of any of bitstreams 118 is to be positioned. In particular, the consecutive VLC codewords between which the respective codeword of any of bitstreams 118 is to be positioned are those between which the source symbol of substream 110 is positioned in accordance with the original order of the un-subdivided source symbol stream 106, to which the respective primary alphabet symbol encoded into the respective codeword of bitstreams 118 belongs. The interleaver 128 may be configured to remove codewords entered into the afore-mentioned codeword entries in sequential order to obtain the common bitstream 126 of interleaved codewords. As has already been described above, the entropy encoders 116 may be configured to sequentially enter their codewords into the codeword entries reserved for the respective entropy encoder 116, and the selector 120 may be configured to forward the alphabet symbols representing the source symbols of the second substream 110 in an order maintaining the order in which the source symbols of the first substream 108 and the second substream 110 were interleaved within the sequence 106 of source symbols.
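    A much-simplified conceptual model of this codeword-entry reservation follows; it is not the patent's exact buffer management, merely a toy queue in which slots are reserved in the order codewords are started and emitted once the front of the queue is complete:

```python
from collections import deque

class InterleaverSketch:
    """Toy first-in-first-out codeword-slot queue (illustrative only)."""
    def __init__(self):
        self.slots = deque()                 # each slot: single-element list

    def push_vlc(self, codeword):
        self.slots.append([codeword])        # VLC codewords are complete at once

    def reserve_pipe(self):
        slot = [None]                        # first bin of a new PIPE codeword
        self.slots.append(slot)              # reserves its position now
        return slot                          # handle held by that partial encoder

    def finish_pipe(self, slot, codeword):
        slot[0] = codeword                   # fill the reserved entry later

    def emit_ready(self):
        out = ""                             # write out completed front slots
        while self.slots and self.slots[0][0] is not None:
            out += self.slots.popleft()[0]
        return out
```

    In this toy model, a PIPE codeword whose first alphabet symbol precedes a later VLC source symbol is emitted first, even if it is completed afterwards, mirroring the ordering rule described above.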



  • [0045]


    Additional measures may be provided in order to cope with situations where certain ones of the entropy encoders 116 are selected so seldom that it takes too long a time to obtain a valid codeword within that very rarely used entropy encoder 116. Examples of such measures are described in more detail below. In particular, the interleaver 128 along with the entropy encoders 116 may, in this case, be configured to flush their alphabet symbols collected so far and codewords having been entered into the afore-mentioned codeword entries, respectively, in a manner such that the time of this flushing procedure may be forecast or emulated at the decoding side.



  • [0046]


    At the decoding side, the deinterleaver 230 may act in the reverse sense: whenever, in accordance with the afore-mentioned parsing scheme, the next source symbol to be decoded is a VLC coded symbol, a current codeword within common bitstream 228 is regarded as a VLC codeword and forwarded within bitstream 206 to VLC decoder 200. On the other hand, whenever any of the alphabet symbols belonging to any of the PIPE encoded symbols of substream 208 is a primary alphabet symbol, i.e. necessitates a new mapping of a codeword of a respective one of the bitstreams 216 to a respective alphabet symbol sequence by the respective entropy decoder 210, the current codeword of common bitstream 228 is regarded as a PIPE encoded codeword and forwarded to the respective entropy decoder 210. The detection of the next codeword border, i.e. the detection of the extension of the next codeword from the end of the codeword just having been forwarded to either of the decoders 200 and 202, respectively, to its end within the inbound interleaved bitstream 228, may be deferred and performed under knowledge of which of the decoders 200 and 202 is the dedicated recipient of this next codeword in accordance with the above-outlined rule: based on this knowledge, the codebook used by the recipient decoder is known and the respective codeword detectable. If, on the other hand, the codebooks were designed such that the codeword borders were detectable without a-priori knowledge about the recipient decoder among 200 and 202, then the codeword separation could be performed in parallel. In any case, due to the interleaving, the source symbols are available at the decoder in an entropy decoded form, i.e. as source symbols, in their correct order at reasonable delay.



  • [0047]


    After having described above embodiments for an entropy encoding apparatus and a respective entropy decoding apparatus, more details for the above-mentioned PIPE encoders and PIPE decoders are described next.



  • [0048]


    A PIPE encoder is illustrated in Fig. 3. Same may be used as the PIPE encoder in Fig. 1a. The PIPE encoder losslessly converts a stream of source symbols 1 into a set of two or more partial bitstreams 12. Each source symbol 1 may be associated with a category or type of a set of one or more categories or types. As an example, the categories can specify the type of the source symbol. In the context of hybrid video coding, a separate category may be associated with macroblock coding modes, block coding modes, reference picture indices, motion vector differences, subdivision flags, coded block flags, quantization parameters, transform coefficient levels, etc. In other application areas such as audio, speech, text, document, or general data coding, different categorizations of source symbols are possible. In general, each source symbol can take a value of a finite or countable infinite set of values, where the set of possible source symbol values can differ for different source symbol categories. For reducing the complexity of the encoding and decoding algorithm and for allowing a general encoding and decoding design for different source symbols and source symbol categories, the source symbols 1 are converted into ordered sets of binary decisions and these binary decisions are then processed by simple binary coding algorithms. Therefore, the binarizer 2 bijectively maps the value of each source symbol 1 onto a sequence (or string) of bins 3. The sequence of bins 3 represents a set of ordered binary decisions. Each bin 3 or binary decision can take one value of a set of two values, e.g. one of the values 0 and 1. The binarization scheme can be different for different source symbol categories. The binarization scheme for a particular source symbol category can depend on the set of possible source symbol values and/or other properties of the source symbols for the particular category. Table 1 illustrates three example binarization schemes for countable infinite sets. Binarization schemes for countable infinite sets can also be applied for finite sets of symbol values. In particular for large finite sets of symbol values, the inefficiency (resulting from unused sequences of bins) can be negligible, but the universality of such binarization schemes provides an advantage in terms of complexity and memory requirements. For small finite sets of symbol values, it is often preferable (in terms of coding efficiency) to adapt the binarization scheme to the number of possible symbol values. Table 2 illustrates three example binarization schemes for finite sets of 8 values. Binarization schemes for finite sets can be derived from the universal binarization schemes for countable infinite sets by modifying some sequences of bins in a way that the finite sets of bin sequences represent a redundancy-free code (and potentially reordering the bin sequences). As an example, the truncated unary binarization scheme in Table 2 was created by modifying the bin sequence for the source symbol 7 of the universal unary binarization (see Table 1). The truncated and reordered Exp-Golomb binarization of order 0 in Table 2 was created by modifying the bin sequence for the source symbol 7 of the universal Exp-Golomb order 0 binarization (see Table 1) and by reordering the bin sequences (the truncated bin sequence for symbol 7 was assigned to symbol 1). For finite sets of symbols, it is also possible to use non-systematic / non-universal binarization schemes, as exemplified in the last column of Table 2.

    Table 1: Binarization examples for countable infinite sets (or large finite sets).

    symbol value | unary binarization | Exp-Golomb order 0 binarization | Exp-Golomb order 1 binarization
    0            | 1                  | 1                               | 10
    1            | 01                 | 010                             | 11
    2            | 001                | 011                             | 0100
    3            | 0001               | 0010 0                          | 0101
    4            | 0000 1             | 0010 1                          | 0110
    5            | 0000 01            | 0011 0                          | 0111
    6            | 0000 001           | 0011 1                          | 0010 00
    7            | 0000 0001          | 0001 000                        | 0010 01
    ...          | ...                | ...                             | ...



    Table 2: Binarization examples for finite sets.

    symbol value | truncated unary binarization | truncated and reordered Exp-Golomb order 0 binarization | non-systematic binarization
    0            | 1                            | 1                                                       | 000
    1            | 01                           | 000                                                     | 001
    2            | 001                          | 010                                                     | 01
    3            | 0001                         | 011                                                     | 1000
    4            | 0000 1                       | 0010 0                                                  | 1001
    5            | 0000 01                      | 0010 1                                                  | 1010
    6            | 0000 001                     | 0011 0                                                  | 1011 0
    7            | 0000 000                     | 0011 1                                                  | 1011 1

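    The unary and universal Exp-Golomb schemes of Table 1 can be sketched as follows (the spaces in the table rows are visual grouping only; the function names are ours):

```python
def unary(n):
    """Unary binarization of Table 1: n zeros followed by a one."""
    return "0" * n + "1"

def exp_golomb(n, k=0):
    """Universal Exp-Golomb binarization of order k, as in Table 1."""
    prefix = 0
    while n >= (1 << (prefix + k)):      # peel off whole prefix groups
        n -= 1 << (prefix + k)
        prefix += 1
    suffix = format(n, "b").zfill(prefix + k) if prefix + k else ""
    return "0" * prefix + "1" + suffix

assert [exp_golomb(n, 0) for n in (0, 1, 3)] == ["1", "010", "00100"]
assert [exp_golomb(n, 1) for n in (0, 2, 7)] == ["10", "0100", "001001"]
```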




  • [0049]


    Each bin 3 of the sequence of bins created by the binarizer 2 is fed into the parameter assigner 4 in sequential order. The parameter assigner assigns a set of one or more parameters to each bin 3 and outputs the bin with the associated set of parameters 5. The set of parameters is determined in exactly the same way at encoder and decoder. The set of parameters may consist of one or more of the following parameters:


    • a measure for an estimate of the probability for one of the two possible bin values for the current bin,

    • a measure for an estimate of the probability for the less probable or more probable bin value for the current bin,

    • an identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin,

    • the category of the associated source symbol,

    • a measure for the importance of the associated source symbol,

    • a measure for the location of the associated symbol (e.g. in temporal, spatial, or volumetric data sets),

    • an identifier specifying the channel code protection for the bin or the associated source symbol,

    • an identifier specifying the encryption scheme for the bin or the associated source symbol,

    • an identifier specifying a class for the associated symbol,

    • the bin number in the sequence of bins for the associated source symbol.
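
    In software, such a bin-plus-parameters record might look like the following minimal Python sketch (the field names are invented for illustration and cover only a few of the parameters listed above):

        from dataclasses import dataclass

        @dataclass
        class BinWithParams:
            value: int            # the binary decision itself, 0 or 1
            p_one: float          # estimate of the probability that the bin is 1
            symbol_category: str  # category of the associated source symbol
            bin_number: int       # position of the bin within its symbol's sequence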




  • [0050]


    The parameter assigner 4 may associate each bin 3,5 with a measure for an estimate of the probability for one of the two possible bin values for the current bin. Alternatively, the parameter assigner 4 may associate each bin 3,5 with a measure for an estimate of the probability for the less probable or more probable bin value for the current bin and an identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin. It should be noted that the probability for the less probable or more probable bin value and the identifier specifying which of the two possible bin values represents the less probable or more probable bin value are equivalent measures for the probability of one of the two possible bin values.



  • [0051]


    The parameter assigner 4 may associate each bin 3,5 with a measure for an estimate of the probability for one of the two possible bin values for the current bin and one or more further parameters (which may be one or more of the above listed parameters). Further, the parameter assigner 4 may associate each bin 3,5 with a measure for an estimate of the probability for the less probable or more probable bin value for the current bin, an identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin, and one or more further parameters (which may be one or more of the above listed parameters).



  • [0052]


    The parameter assigner 4 may determine one or more of the above mentioned probability measures (measure for an estimate of the probability for one of the two possible bin values for the current bin, measure for an estimate of the probability for the less probable or more probable bin value for the current bin, identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin) based on a set of one or more already encoded symbols. The encoded symbols that are used for determining the probability measures can include one or more already encoded symbols of the same symbol category, one or more already encoded symbols of the same symbol category that correspond to data sets (such as blocks or groups of samples) of neighboring spatial and/or temporal locations (in relation to the data set associated with the current source symbol), or one or more already encoded symbols of different symbol categories that correspond to data sets of the same and/or neighboring spatial and/or temporal locations (in relation to the data set associated with the current source symbol).
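
    The text does not prescribe a particular estimator, but one common realization of such a measure is an exponentially decaying average over the already encoded bin values of the same context. The following Python sketch (an assumption for illustration, with an invented decay parameter) shows the idea:

        class AdaptiveEstimate:
            """Tracks an estimate of P(bin == 1) for one context."""

            def __init__(self, p_init=0.5, alpha=0.95):
                self.p_one = p_init  # current probability estimate for bin value 1
                self.alpha = alpha   # decay factor; smaller values adapt faster

            def update(self, bin_value):
                # pull the estimate toward the most recently coded bin value
                self.p_one = self.alpha * self.p_one + (1 - self.alpha) * bin_value

        # one estimator would be kept per context, e.g. keyed by
        # (symbol category, bin number, spatial/temporal neighborhood)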



  • [0053]


    Each bin with an associated set of parameters 5 that is output by the parameter assigner 4 is fed into a bin buffer selector 6. The bin buffer selector 6 potentially modifies the value of the input bin 5 based on the input bin value and the associated parameters 5 and feeds the output bin 7 - with a potentially modified value - into one of two or more bin buffers 8. The bin buffer 8 to which the output bin 7 is sent is determined based on the value of the input bin 5 and/or the value of the associated parameters 5.



  • [0054]


    The bin buffer selector 6 may leave the value of the bin unmodified, i.e., the output bin 7 always has the same value as the input bin 5.



  • [0055]


    The bin buffer selector 6 may determine the output bin value 7 based on the input bin value 5 and the associated measure for an estimate of the probability for one of the two possible bin values for the current bin. The output bin value 7 may be set equal to the input bin value 5 if the measure for the probability for one of the two possible bin values for the current bin is less than (or less than or equal to) a particular threshold; if the measure for the probability for one of the two possible bin values for the current bin is greater than or equal to (or greater than) the threshold, the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value). Alternatively, the output bin value 7 may be set equal to the input bin value 5 if the measure for the probability for one of the two possible bin values for the current bin is greater than (or greater than or equal to) a particular threshold; if the measure for the probability for one of the two possible bin values for the current bin is less than or equal to (or less than) the threshold, the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value). The value of the threshold may correspond to a value of 0.5 for the estimated probability for both possible bin values.
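
    For the threshold value of 0.5 this rule reduces to: invert the bin whenever its value is currently estimated to be the more probable one, so that one fixed bin value is always the less probable value at the bin buffer input. A minimal Python sketch of one of the two symmetric variants described above:

        def select_output_bin(input_bin, p_one, threshold=0.5):
            # keep the bin while value 1 is estimated to be the less probable
            # value; otherwise invert it, so that an output bin of 1 always
            # denotes the less probable value
            if p_one <= threshold:
                return input_bin
            return 1 - input_bin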



  • [0056]


    The bin buffer selector 6 may determine the output bin value 7 based on the input bin value 5 and the associated identifier specifying an estimate for which of the two possible bin values represents the less probable or more probable bin value for the current bin. The output bin value 7 may be set equal to the input bin value 5 if the identifier specifies that the first of the two possible bin values represents the less probable (or more probable) bin value for the current bin, and the output bin value 7 is modified (i.e., it is set to the opposite of the input bin value) if the identifier specifies that the second of the two possible bin values represents the less probable (or more probable) bin value for the current bin.



  • [0057]


    The bin buffer selector 6 may determine the bin buffer 8 to which the output bin 7 is sent based on the associated measure for an estimate of the probability for one of the two possible bin values for the current bin. The set of possible values for the measure for an estimate of the probability for one of the two possible bin values may be finite, and the bin buffer selector 6 may contain a table that associates exactly one bin buffer 8 with each possible value for the estimate of the probability for one of the two possible bin values, where different values for the measure for an estimate of the probability for one of the two possible bin values can be associated with the same bin buffer 8. Further, the range of possible values for the measure for an estimate of the probability for one of the two possible bin values may be partitioned into a number of intervals, the bin buffer selector 6 determines the interval index for the current measure for an estimate of the probability for one of the two possible bin values, and the bin buffer selector 6 contains a table that associates exactly one bin buffer 8 with each possible value for the interval index, where different values for the interval index can be associated with the same bin buffer 8. Input bins 5 with opposite measures for an estimate of the probability for one of the two possible bin values (opposite measures are those which represent the probability estimates P and 1 - P) may be fed into the same bin buffer 8. Further, the association of the measure for an estimate of the probability for one of the two possible bin values for the current bin with a particular bin buffer may be adapted over time, e.g. in order to ensure that the created partial bitstreams have similar bit rates.
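
    The interval-index variant, including the folding of opposite estimates P and 1 - P onto the same buffer, can be sketched as follows in Python (the number of intervals and the table contents are illustrative choices, not values taken from this text):

        NUM_INTERVALS = 8
        # illustrative table: several interval indices may share one bin buffer
        BUFFER_FOR_INTERVAL = [0, 0, 1, 1, 2, 2, 3, 3]

        def select_bin_buffer(p_one):
            # fold opposite estimates P and 1 - P onto the same representative,
            # so p lies in [0, 0.5]
            p = min(p_one, 1.0 - p_one)
            interval = min(int(p * 2 * NUM_INTERVALS), NUM_INTERVALS - 1)
            return BUFFER_FOR_INTERVAL[interval]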



  • [0058]


    The bin buffer selector 6 may determine the bin buffer 8 to which the output bin 7 is sent based on the associated measure for an estimate of the probability for the less probable or more probable bin value for the current bin. The set of possible values for the measure for an estimate of the probability for the less probable or more probable bin value may be finite, and the bin buffer selector 6 may contain a table that associates exactly one bin buffer 8 with each possible value of the estimate of the probability for the less probable or more probable bin value, where different values for the measure for an estimate of the probability for the less probable or more probable bin value can be associated with the same bin buffer 8. Further, the range of possible values for the measure for an estimate of the probability for the less probable or more probable bin value may be partitioned into a number of intervals, the bin buffer selector 6 determines the interval index for the current measure for an estimate of the probability for the less probable or more probable bin value, and the bin buffer selector 6 contains a table that associates exactly one bin buffer 8 with each possible value for the interval index, where different values for the interval index can be associated with the same bin buffer 8. The association of the measure for an estimate of the probability for the less probable or more probable bin value for the current bin with a particular bin buffer may be adapted over time, e.g. in order to ensure that the created partial bitstreams have similar bit rates.



  • [0059]


    Each of the two or more bin buffers 8 is connected with exactly one bin encoder 10, and each bin encoder is only connected with one bin buffer 8. Each bin encoder 10 reads bins from the associated bin buffer 8 and converts a sequence of bins 9 into a codeword 11, which represents a sequence of bits. The bin buffers 8 represent first-in-first-out buffers; bins that are fed later (in sequential order) into a bin buffer 8 are not encoded before bins that are fed earlier (in sequential order) into the bin buffer. The codewords 11 that are output by a particular bin encoder 10 are written to a particular partial bitstream 12. The overall encoding algorithm converts source symbols 1 into two or more partial bitstreams 12, where the number of partial bitstreams is equal to the number of bin buffers and bin encoders. A bin encoder 10 may convert a variable number of bins 9 into a codeword 11 of a variable number of bits. One advantage of the above- and below-outlined PIPE coding is that the encoding of bins can be done in parallel (e.g. for different groups of probability measures), which reduces the processing time for several implementations.
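
    The buffer/encoder pairing can be pictured with a toy variable-to-variable length code in Python; the codeword table below is a made-up example for a bin buffer whose bins are mostly 0 and is not a code defined in this text:

        from collections import deque

        # toy V2V code for a skewed bin buffer: runs of zeros compress well;
        # the bin-sequence side is prefix-free and complete, so greedy matching works
        V2V_CODE = {"000": "1", "001": "011", "01": "010", "1": "00"}

        def encode_bin_buffer(bins):
            """Greedily match bin sequences against the V2V table, in FIFO order."""
            buffer, bitstream, pending = deque(bins), [], ""
            while buffer:
                pending += str(buffer.popleft())
                if pending in V2V_CODE:      # a complete bin sequence was matched
                    bitstream.append(V2V_CODE[pending])
                    pending = ""
            return "".join(bitstream), pending  # pending bins await further input

        print(encode_bin_buffer([0, 0, 0, 0, 1, 0, 0, 1]))  # -> ('1010011', '')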



  • [0060]


    Another advantage of PIPE coding is that the bin encoding, which is done by the bin encoders 10, can be specifically designed for different sets of parameters 5. In particular, the bin encoding and decoding can be optimized (in terms of coding efficiency and/or complexity) for different groups of estimated probabilities. On the one hand, this allows a reduction of the encoding/decoding complexity relative to arithmetic coding algorithms with similar coding efficiency. On the other hand, it allows an improvement of the coding efficiency relative to VLC coding algorithms with similar encoding/decoding complexity. The bin encoders 10 may implement different encoding algorithms (i.e. mappings of bin sequences onto codewords) for different groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different groups of measures for an estimate of the probability for the less probable or more probable bin value for the current bin. Alternatively, the bin encoders 10 may implement different encoding algorithms for different channel protection codes. The bin encoders 10 may implement different encoding algorithms for different encryption schemes. The bin encoders 10 may implement different encoding algorithms for different combinations of channel protection codes and groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of channel protection codes and groups of measures for an estimate of the probability for the less probable or more probable bin value 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of encryption schemes and groups of measures for an estimate of the probability for one of the two possible bin values 5 for the current bin. The bin encoders 10 may implement different encoding algorithms for different combinations of encryption schemes and groups of measures for an estimate of the probability for the less probable or more probable bin value 5 for the current bin.



  • [0061]


    The bin encoders 10 - or one or more of the bin encoders - may represent binary arithmetic encoding engines. One or more of the bin encoders may represent a binary arithmetic coding engine, wherein the mapping from the representative LPS/LPB probability p_LPS of a given bin buffer to a corresponding code interval width R_LPS - i.e. the interval subdivision of the internal state of the binary arithmetic coding engine, which is defined by the current interval width R and the current interval offset L, identifying, for example, the lower bound of the code interval - is realized by using a table lookup. For each table-based binary arithmetic coding engine associated with a given bin buffer, K representative interval width values {Q_0, ..., Q_(K-1)} may be used for representing R_LPS, with the choice of K and the representative interval width values {Q_0, ..., Q_(K-1)} being dependent on the bin buffer. For a choice of K > 1, arithmetic encoding of a bin may involve the substeps of mapping the current interval width R to a quantization index q with values in {0, ..., K-1} and performing the interval subdivision by accessing the corresponding partial interval width value Q_q from a lookup table, using q as an index. For a choice of K = 1, i.e., for the case where only one representative interval width value Q_0 is given, this value Q_0 may be chosen as a power of two in order to allow decoding of multiple MPS/MPB values entering the corresponding bin buffer within a single renormalization cycle. The resulting codewords of each arithmetic coding engine may be separately transmitted, packetized, or stored, or they may be interleaved for the purpose of transmission or storage as described hereinafter.
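
    A single interval-subdivision step of such a table-based engine might look as follows in Python (the register width, the quantizer, and the four Q values are illustrative assumptions; an actual engine would also renormalize R and update the offset L, which is omitted here):

        # illustrative representative interval widths Q_0..Q_3 for K = 4,
        # for a 9-bit interval register R kept in [256, 512)
        Q = [80, 112, 144, 176]

        def subdivide(R, coding_lps):
            # map the current interval width R to a quantization index q in {0..3}
            q = (R >> 6) & 3
            r_lps = Q[q]          # table lookup replaces a multiplication
            if coding_lps:
                return r_lps      # LPS path: the interval narrows to R_LPS
            return R - r_lps      # MPS path: the remaining interval width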



  • [0062]


    That is, a binary arithmetic coding engine 10 could perform the following steps in coding the bins in its bin buffer 8:
