#1 - Contributing User (Devshed Newbie, joined Dec 2007)

    Data correction methods resistant to pessimistic cases


    Standard data correction methods take blocks of data and enlarge them to protect against some specific set of errors. If the errors fall outside this set, we lose the whole block.
    For example, Hamming (7,4) uses 3 additional checksum bits per 4 bits of information - it works fine as long as there is at most 1 error per 7-bit block.
    That is quite a lot of redundancy, but is the data safe now? The problem is the pessimistic cases - even if the expected error rate is 1/100 bits, it still happens quite often that some 7-bit block contains 2 errors.

    To be able to correct errors we usually add redundancy at a constant density, but the density of errors usually isn't constant - it fluctuates, sometimes above average, sometimes below.
    When it is above average, we need a lot of redundancy to cope with these pessimistic cases.
    When it is below average, we have used more redundancy there than was really needed - we waste some capacity.

    The idea is to somehow transfer these unused surpluses of redundancy to help with the difficult cases.
    Thanks to this, instead of adding redundancy sized for the pessimistic error density, we could size it for a density only a bit above the average, which is usually a few orders of magnitude smaller.
    To do this we cannot place the data in independent blocks (like the 7-bit blocks of Hamming) - their redundancy would be independent too.
    We should use a coding which hides the redundancy so that each piece of it affects a large part of the message.

    I'll now show an example of how it can be done.
    Unfortunately it has variable latency - simple errors can be corrected quickly, but more complicated ones may take a long time... maybe there are better ways?

    The idea is to use a very sensitive coding - one in which any error makes the rest of the decoded sequence look like a completely random bit sequence.
    For example a block cipher which uses the previous block to calculate the following one ... but there is a much better coding for this purpose, which I will describe later.

    Now add some easily recognizable redundancy to the information - for example insert a '1' after every bit (each such bit is hidden in all succeeding bits of the encoded message).
    If while decoding we find a '0' in one of these places, it means there was an error earlier, most probably nearby.
    Knowing the statistical characteristics of the expected errors, we can make a list of the most probable error patterns for such cases and sort it by probability - at the top of this list there should be 'previous bit flipped', ... after a while 'two bits flipped: ...' will appear. This list can be very large.
    Now if we know that an error appeared (nearby), we go through this list position by position, apply the assumed correction (flip some bits) and try to decode a further fixed number of bits (a few dozen).
    If everything is fine - we get only '1' on the selected positions - we can assume it was this error.
    If not - we try the next one from the list.
    This list can be generated online - given a large amount of time, we could repair even a badly damaged transmission.
    While creating the list, we have to remember that errors can also appear in the succeeding bits.
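
    As a sketch of how such a candidate list could be generated (a toy illustration only, not the coding itself - the window size, the error rate e and the restriction to 1- and 2-bit patterns are assumptions made just for this example):
    Code:
        # Candidate error patterns over the last WINDOW bits, sorted from most to
        # least probable under an i.i.d. bit-flip model. Ties keep the nearest
        # positions first, so 'flip the previous bit' ends up at the top of the list.
        from itertools import combinations

        e = 0.01       # assumed probability that a single bit was flipped
        WINDOW = 10    # how far back we search for the error (illustrative choice)

        def pattern_probability(pattern):
            """Probability that exactly these positions (counted back from the
            detection point) were flipped and the rest of the window was correct."""
            k = len(pattern)
            return (e ** k) * ((1 - e) ** (WINDOW - k))

        candidates = [c for k in (1, 2)
                      for c in combinations(range(1, WINDOW + 1), k)]
        candidates.sort(key=pattern_probability, reverse=True)

        print(candidates[:4])   # [(1,), (2,), (3,), (4,)] - single flips, nearest first
    Each candidate would then be applied and verified by decoding a few dozen further bits, as described above.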

    Using block ciphers is a bit nasty - they are slow, and the blocks to search for errors are large ...
    There is a new coding just ideal for the above purpose - Asymmetric Numeral Systems (ANS) - a new entropy coder which has very nice properties for cryptography ... and for data correction - it's much faster than block ciphers and uses small blocks of various lengths.
    Here for example is a demonstration of it:
    http://demonstrations.wolfram.com/Da...umeralSystems/

    What do you think about it?
  #2 - Contributing User (Devshed Newbie, joined Dec 2007)

    Uniform redundancy distribution by introducing a forbidden symbol.


    We can use a property of ANS entropy coding to make the above process quicker and to distribute the redundancy really uniformly:
    to create an easily recognizable pattern, instead of inserting a '1' symbol regularly, we can add a new symbol - the forbidden one.
    If it occurs while decoding, we know that something went wrong - and the nearer to the point of detection, the more probable the error location.

    Let's say we use symbols with some probability distribution (p_i), so on average we need H = -sum_i p_i lg p_i bits/symbol.
    For example, if we just want to encode bytes without compression, we can treat them as 256 symbols with p_i = 1/256 (H = 8 bits/symbol).

    Our new symbol gets some chosen probability q. The nearer it is to 1, the more redundancy density we add and the easier it is to correct errors.
    We have to rescale the remaining probabilities: p_i -> (1-q) p_i.
    In this way, the size of the file increases r = (H - lg(1-q))/H times.
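
    As a quick sanity check of this formula (a minimal sketch; the q values are just the examples used later in this thread):
    Code:
        # r = (H - lg(1-q)) / H : growth factor of the file after adding the
        # forbidden symbol with probability q to symbols of entropy H bits each.
        from math import log2

        def rate(H, q):
            return (H - log2(1 - q)) / H

        print(rate(8, 7/8))   # bytes without compression (H = 8): r = 1.375
        print(rate(4, 7/8))   # the 4+3 case below (H = 4): r = 1.75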

    Now if while decoding we get the forbidden symbol, we know that:
    - with probability q, the first not-yet-corrected error occurred in some of the bits used to decode the last symbol,
    - with probability (1-q)q, it occurred in the bits used to decode the previous symbol,
    - with probability (1-q)^2 q ...

    The probability of the succeeding cases drops exponentially, especially if (1-q) is near 0.
    On the other hand, the number of corrections to try grows exponentially with the number of errors considered.
    But observe that, for example, the number of possible placements of 5 errors among 50 bits is only about 2 million - it can be checked in a moment.
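
    A one-line check of that count (a sketch):
    Code:
        # Number of ways to place 5 errors among 50 bits - the count mentioned above.
        from math import comb
        print(comb(50, 5))   # 2118760, about 2 million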

    Let's compare this with two well-known data correction methods: Hamming 4+3 (to store 4 bits we use 3 additional bits) and tripling each bit (1+2).
    Take the simplest error distribution model - for each bit the probability that it is flipped is constant, say e = 1/100.
    The probability that a 7-bit block contains at least 2 errors is
    1 - (1-e)^7 - 7e(1-e)^6 =~ 0.2%
    For a 3-bit block it's about 0.03%.
    So per kilobyte of data we irreversibly lose about 16 bits with Hamming 4+3 and about 2.4 bits with bit tripling.
    We see that even for seemingly well-protected methods, we lose a lot of data because of the pessimistic cases.
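
    Here is a short sketch reproducing these numbers (assuming, as in the arithmetic above, that a kilobyte of data means 8192 information bits):
    Code:
        # Expected irreversibly lost bits per kilobyte of data (8192 information bits)
        # for Hamming 4+3 and for bit tripling, with bit-flip probability e = 0.01.
        e = 0.01

        def p_block_fails(n):
            """Probability that an n-bit block contains at least 2 errors."""
            return 1 - (1 - e)**n - n * e * (1 - e)**(n - 1)

        hamming_blocks = 8192 // 4    # 4 data bits per 7-bit block
        tripling_blocks = 8192        # 1 data bit per 3-bit block

        print(hamming_blocks * p_block_fails(7) * 4)    # ~16.6 bits lost
        print(tripling_blocks * p_block_fails(3) * 1)   # ~2.4 bits lost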

    For ANS-based data correction:
    4+3 case (r = 7/4) - we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols has probability 1/16 * 1/8 = 1/128.
    In practice ANS works best if the lg(p_i) aren't natural numbers, so q should (but doesn't have to) be not exactly 7/8 but something close to it.
    Now if the forbidden symbol occurs, with probability about 7/8 we only have to try flipping one of the (about) 7 bits used to decode this symbol.
    With 8 times smaller probability we have to try flipping one of the 7 bits of the previous symbol ... with much smaller probability, depending on the error density model, we should try flipping some two bits ... and even extremely pessimistic cases look like they take reasonable time to correct.
    For the 1+2 case (r = 3), the forbidden symbol has probability about 3/4, and '0', '1' have 1/8 each.
    With probability 3/4 we only have to correct one of 3 bits ... with probability 255/256, one of 12 bits ...

    -----

    There is one problem - in practice the coding/decoding tables should fit into cache, so we can use at most about a million states.
    While trying thousands of correction candidates, we could accidentally reach a valid state with a wrong correction - a few bits would be 'corrected' the wrong way and we wouldn't even notice.
    To prevent this we can, for example, use two similar stages of ANS - the first produces bytes and the second converts that output into the final sequence.
    The second stage gets nearly uniformly distributed bytes, but ANS itself creates some small perturbations and it will work fine.
    Thanks to this the number of states grows to the square of the initial one, reducing this probability by a few orders of magnitude at the cost of doubled time requirements.
    We could use some checksum to confirm the result ultimately.
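
    To get a feeling for how unlikely an undetected wrong correction is during verification, here is a rough sketch (it assumes that after a wrong correction the decoder behaves as if fed random bits, so each further decoded symbol is the forbidden one with probability q - a modelling assumption, not a proof):
    Code:
        # Probability that a wrong correction survives verification over d further
        # decoded symbols without ever producing the forbidden symbol.
        def p_wrong_survives(q, d):
            return (1 - q) ** d

        print(p_wrong_survives(7/8, 6))    # ~3.8e-06 for the 4+3 case
        print(p_wrong_survives(7/8, 12))   # ~1.5e-11 - verifying longer helps exponentially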
  #3 - Contributing User (Devshed Intermediate, joined Feb 2007)


    Originally Posted by Jarek Duda
    What do you think about it?

    You are going to need to be more specific with your question; you have put up a significant amount of background information and asked a broad question.

    Here are two examples:

    1. If I said, I like Atlanta, what do you think about it?

    2. If I were to discuss the details of eating a hot dog with small amounts of mustard and chili while trying to pedal a unicycle that has a tire pressure of 25 PSI, which is above the recommended level but allows me to make fantastical back-flip maneuvers when the pizza delivery guy almost runs me over because he is too busy sending a hate text to his Senator about the bailouts - what do you think about it?

    In the first example it is clear what I am asking; in the second example, the question is too vague.

    Can you tell from my second example that I am ready for lunch?
  #4 - Contributing User (Devshed Newbie, joined Dec 2007)
    I don't know Atlanta, and I think you should be more careful...
    As other people have probably worked out, the 'it' in my question referred to the new approach to data correction I described above...

    -------------

    I've just realized that we can use the huge freedom of choice in the ANS coding functions to improve latency - we can arrange that if the forbidden symbol occurs and there was only a single error, we are sure it was among the bits used to decode this symbol.
    We may still have to go back to previous symbols, but only if there were at least 2 errors among these bits - an order of magnitude less probable than before.
    The other advantage is that if we try to verify a wrong correction by decoding further, a single error in a block will automatically tell us that the correction is wrong. There could be 2 errors, but that is much less probable and can be checked much later.

    The trick is that the forbidden symbol usually dominates the coding tables, so we can arrange that whenever a given sequence of transferred bits decodes to an allowed symbol, every sequence differing in one bit (Hamming distance 1) decodes to the forbidden symbol.

    So to initialize, we choose some numbers of appearances for the allowed symbols and we have to place them somehow.
    For example: take an unplaced symbol, place it in a random unused position (using a list of unused positions), and place the forbidden symbol on each state differing in one of 'some number of' last bits.
    This 'some number' is a bit tricky - it has to work assuming that previously only allowed symbols were decoded, but it could have been any of them.
    If we are not compressing, all of them are equally probable, and this number is -lg(p_i) plus or minus 1: plus for high states, minus for low ones.
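
    A rough Python sketch of that placement procedure (simplifying assumptions of my own: states numbered 0..255 instead of the usual ANS range, a fixed SPAN of guarded last bits instead of the plus/minus 1 adjustment, and no attention to the symbol frequencies an actual ANS table needs):
    Code:
        # Random table initialization: place each allowed symbol at a random unused
        # state and force the forbidden symbol onto every state differing in one of
        # the last SPAN bits, so that Hamming distance 1 always leads to 'forbidden'.
        import random

        N_STATE_BITS = 8              # illustrative: 256 states
        ALLOWED = list(range(16))     # 16 allowed symbols, as in the 4+3 example
        SPAN = 7                      # ~ -lg(p_i) last bits to guard
        FORBIDDEN = 'forbidden'

        table = {}                                    # state -> symbol
        free = list(range(1 << N_STATE_BITS))
        random.shuffle(free)

        for sym in ALLOWED:
            while free and free[-1] in table:         # skip states already marked
                free.pop()
            if not free:
                break                                 # no room left for this symbol
            s = free.pop()
            table[s] = sym
            for b in range(SPAN):                     # guard all distance-1 neighbours
                table.setdefault(s ^ (1 << b), FORBIDDEN)

        for s in range(1 << N_STATE_BITS):            # fill whatever remained unused
            table.setdefault(s, FORBIDDEN)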

    Some states should remain unused after this procedure. We can fill them with the forbidden symbol or continue the procedure, inserting more allowed symbols.
    This random initialization still leaves a huge freedom of choice - we can use it to additionally encrypt the data, using a random generator initialized with the key.
    If we want data correction only, we can exploit the fact that in this procedure many forbidden symbols are marked several times - the more, the smaller the output file ... with slightly smaller but comparable safety.
    So we could deliberately choose some good schemes, maybe even ones using Hamming distance 2 (or greater) - to have to go back to the previous symbol, 3 errors would have to occur.

    For example, the 4+3 scheme seems perfect: we transfer on average 7 bits, and for every allowed symbol there are 7 forbidden ones.
    For some high states like 111******** (the stars are the transferred bits) we have to place 8 forbidden symbols, but for low ones like 10000****** we can place only six.
    Some forbidden states will be marked a few times, so we should carry out the whole procedure and possibly use a slightly smaller (or larger) number of allowed symbols.
  #5 - Contributing User (Devshed Intermediate, joined Feb 2007)
    Originally Posted by Jarek Duda
    I don't know Atlanta, and I think you should be more careful...
    As other people have probably worked out, the 'it' in my question referred to the new approach to data correction I described above...
    Atlanta was an example, you don't need to "know" it. Careful about what?

    What people? You and I are the only ones posting in this thread. It sounds like you don't really have a question, you are just making random posts, so I will leave you be.
  #6 - Contributing User (Devshed Intermediate, joined Feb 2007)
    Jarek Duda, based on your history, it seems you don't really want to discuss your posts:

    http://forums.devshed.com/security-and-cryptography-17/cryptosystem-based-on-asymmetric-numeral-systems-497071.html
  #7 - Contributing User (Devshed Newbie, joined Dec 2007)
    So please look at that link and tell me if I've left anyone's post without a comment (where I could...).
    Look at the usual reactions to cryptography concepts on this security forum. What do you think it means if, in such a place, there is no answer to the question: how do you break it?
    You can check whether the demonstration works, or look here.

    If you want to ask a question not connected to the topic of this thread, you can start a new one, but I thought it would be polite to answer them...
    Now I will gladly, maturely discuss the idea I've presented - data correction methods which don't place information in independent blocks, but allow redundancy unused elsewhere to help with the pessimistic cases.

    ---------------------------------

    I've just realized that Hamming codes and bit tripling are special (degenerate) cases of ANS-based data correction.
    In the previous post I argued that it would be beneficial if any two allowed states had Hamming distance at least 2.
    If we made this distance at least 3, we could unambiguously and instantly correct a single error, as in Hamming.

    To get bit tripling from ANS we use:
    states from 1000 to 1111,
    symbol '0' placed in state 1000, symbol '1' in state 1111 (Hamming distance 3), and the remaining six states holding the forbidden symbol.
    There is only 1 appearance of each allowed symbol, so after decoding it, before the bit transfer, the state always drops to '1' and the three youngest bits are transferred from the input.
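
    As a toy Python rendering of this degenerate table (just to make the 'state drops back to 1' behaviour concrete; the nearest-allowed-state lookup plays the role of the instant single-error correction mentioned above):
    Code:
        # Bit tripling as a tiny ANS-style table: states 0b1000..0b1111,
        # '0' at 0b1000, '1' at 0b1111, the forbidden symbol everywhere else.
        FORBIDDEN = 'forbidden'
        table = {s: FORBIDDEN for s in range(0b1000, 0b10000)}
        table[0b1000] = 0
        table[0b1111] = 1

        def decode(bits):
            out = []
            for i in range(0, len(bits), 3):
                state = 0b1000 | (bits[i] << 2) | (bits[i+1] << 1) | bits[i+2]
                if table[state] == FORBIDDEN:
                    # single error: move to the closest allowed state (distance 1)
                    state = min((0b1000, 0b1111),
                                key=lambda a: bin(a ^ state).count('1'))
                out.append(table[state])
                # the state has dropped back to '1'; the next 3 bits are read afresh
            return out

        print(decode([0,0,0, 1,1,1, 0,1,0]))   # [0, 1, 0] - the damaged triple is repaired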

    To get Hamming 4+3:
    states are from 10000000 to 11111111.
    We have 16 allowed symbols from '0000' to '1111', each with exactly one appearance - the state 1*******, where the stars are the 7 bits it would be coded as in Hamming - any two different ones have Hamming distance at least 3.
    After decoding the symbol, the state drops to '1' again, and this '1' becomes the oldest bit after the bit transfer.

    The fact that each allowed symbol has only one appearance means that after decoding we drop to '1' every time - it's a kind of degenerate case - all blocks are independent, and we don't transfer any redundancy between them.
    It can handle a large error density, like 1/7 (for Hamming 4+3) ... but only as long as each block contains at most 1 error.
    In practice errors don't come with such regularity, and even with much smaller error density Hamming loses a lot of data (like 16 bits per kilobyte for 0.01 error probability).

    Let's think about the theoretical limit on the number of bits of redundancy we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file.
    To find this threshold, let's consider a simpler-looking question: how much information is stored in such an uncertain bit?
    Take the simplest error distribution model - for each bit the probability that it is flipped equals e (near zero), so if we see '1' we know that with probability 1-e it's really '1', and with probability e it's '0'.
    If we also knew which of these cases holds - information worth
    h(e) = -e lg(e) - (1-e) lg(1-e)
    bits - we would have a whole bit.
    So such an uncertain bit is worth 1 - h(e) bits.
    To transfer n real bits, we therefore have to use at least n/(1-h(e)) of these uncertain bits - the theoretical limit for being able to read the message is (asymptotically)
    h(e)/(1-h(e)) additional bits of redundancy per bit of information.

    So a perfect data correction coder for e = 1/100 error probability would need only about 0.088 additional bits/bit to be able to restore the message.
    Hamming 4+3, despite using 0.75 additional bits/bit, loses 16 bits/kilobyte with the same error distribution.
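
    Reproducing that number (a minimal sketch):
    Code:
        # The limit derived above: h(e)/(1-h(e)) bits of redundancy per information bit.
        from math import log2

        def h(e):
            return -e * log2(e) - (1 - e) * log2(1 - e)

        e = 0.01
        print(h(e) / (1 - h(e)))   # ~0.088 bits/bit, versus 0.75 bits/bit for Hamming 4+3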

    Hamming assumes that every 7-bit block can arrive in 8 ways - correct, or with one of its 7 bits flipped.
    It uses the same amount of information to encode each of them, so it has to add at least lg(8) = 3 bits of redundancy per block - we see it's done optimally...
    ... but only if all 8 of these ways were equally probable under the error distribution...
    In practice the most probable case is no error, then one error ... and, with much smaller probabilities, more errors ... depending on what the error distribution of our medium looks like.

    To go in the direction of the perfect error correction coder, we have to break with the uniform distribution of cases used by Hamming and try to match the real error distribution probabilities.

    If the intermediate state of ANS-based data correction can take many values, we transfer some redundancy between blocks - the 'blocks' become connected, and if more errors occur in one of them, we can use this connection to see that something is wrong and use unused redundancy from the succeeding blocks to correct it - relying on the assumption that, according to the error distribution, the succeeding blocks are correct with large probability.
    We have huge freedom in choosing the ANS parameters to get closer to the assumed probability model of the error distribution ... to the perfect data correction coder.
    Intuitively (?) using ANS with two allowed symbols and properly chosen functions we could come close to this limit for a uniform error distribution (-lg(1-q) > h(e)/(1-h(e))), but it would require a large number of states ... and an enormous amount of computation...
  #8 - Contributing User (Devshed Newbie, joined Dec 2007)
    I've just finished a large paper about ANS. It adds some deeper analysis and gathers, in rethought form, the information I've posted on different forums...
    It also shows that the presented data correction approach can really allow reaching the theoretical Shannon limit, and it looks to have expected linear time:
    http://arxiv.org/abs/0902.0271

    The only known 'practical' approach - Low Density Parity Check - distributes a huge number of checksum bits uniformly; it takes a long time (N^2 or more) to distribute them and an NP-type search to find the proper correction, and still we do not reach the theoretical limit.
    The presented method works in a completely different way - it can be imagined as path tracking: we know the starting and ending positions and we want to walk between them along the proper path. While we are on this path everything is fine, but when we lose it, at each step we have a chosen probability of becoming aware of that fact. We can then go back and try some correction. We pay for this parameter with capacity, but if this probability is chosen above a threshold corresponding exactly to Shannon's limit, the number of corrections we have to try no longer grows exponentially but polynomially, so we can easily verify whether a correction was the proper one ... checking all of them in pessimistically polynomial time.

    ps. I've been asked to make the correction algorithm more precise - a new version of the paper is available (with a few required corrections) at the same link.

    pps. I've just spotted a problem with combining data correction with the method of enlarging the internal state in which we switch some of the youngest bits - these bits are transferred immediately and used in the next cycle, so we would have to locate them...
    To make correction simpler, the order should be changed: a coding step is the bit transfer, then switching the youngest bit (or a few) of this reduced intermediate state, then encoding the symbol.
    To ensure that this bit switch doesn't lead outside I_s, we have to use l_s that are even numbers (or correspondingly higher powers of 2).
    Last edited by Jarek Duda; February 27th, 2009 at 10:48 AM.
  #9 - Contributing User (Devshed Newbie, joined Dec 2007)
    A simulator of the correction process has just been published on Wolfram's page:
    http://demonstrations.wolfram.com/CorrectionTrees/
    It shows that we finally have a near-Shannon-limit method working in nearly linear time for any noise level.

    For a given probability of bit damage (p_b), we choose the parameter p_d. The higher this parameter, the more redundancy we add and the easier it is to correct errors.
    We want to find the proper correction (the red path in the simulator). The main correction mechanism is that while we are expanding the proper correction everything is fine, but in each step of expanding a wrong correction we have probability p_d of realizing it. With p_d large enough, the number of corrections we have to check no longer grows exponentially.
    At each step the tree structure is known, and using it we choose the most probable leaf to expand.
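
    Here is a toy model of that search loop (my own sketch, not the simulator's code: channel errors and the p_d detection events are just drawn at random, and the heap is keyed by the number of assumed bit flips, so the most probable leaf comes first):
    Code:
        # Toy correction-tree search: the proper path always decodes cleanly, while
        # each step of a wrong path is detected (forbidden symbol) with probability p_d.
        import heapq, random

        def simulate(p_b, p_d, length, max_expansions=200_000):
            errors = [random.random() < p_b for _ in range(length)]   # actual channel damage
            heap = [(0, 0, True)]     # (assumed bit flips, position, still the proper path?)
            expansions = 0
            while heap and expansions < max_expansions:
                flips, pos, proper = heapq.heappop(heap)
                expansions += 1
                if pos == length:
                    return proper, expansions        # a leaf reached the end of the block
                for guess_flip in (False, True):     # assume this bit was correct / flipped
                    child_proper = proper and (guess_flip == errors[pos])
                    # wrong continuations survive a step only with probability 1 - p_d
                    if child_proper or random.random() > p_d:
                        heapq.heappush(heap, (flips + guess_flip, pos + 1, child_proper))
            return None, expansions                  # gave up (pessimistic case)

        print(simulate(p_b=0.05, p_d=0.7, length=300))   # typically (True, a few hundred expansions)
    With p_d below the threshold, the surviving wrong branches flood the heap and the number of expansions explodes - the phase-transition behaviour described below.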

    I've realized that practical correction methods (ones not requiring exponential correction time) need a bit more redundancy than the theoretical (Shannon) limit. Redundancy reduces the number of corrections to consider. In practical correction methods we have to extend corrections step by step, so we have to assume that the expected number of corrections up to a given moment is finite, which requires more redundancy than Shannon's limit (observe that block codes fulfill this assumption).
    This limit is calculated in the latest version of the paper (0902.0271). The basic correction algorithm (as in the simulator) works at a slightly worse limit (it needs an encoded file larger by at most 13%), but it can probably be improved.
    Finally, this new family of random trees has two phase transitions - for small p_d < p_d^0 the tree immediately expands exponentially. For p_d^0 < p_d < p_d^2 the tree generally has small width, but rare high error concentrations make its expected width infinite (like a long tail in a probability distribution). For p_d > p_d^2 it has finite expected width.

    Error correction methods used today work practically only for very low noise (p_b < 0.01). The presented approach works well for any noise (p_b < 0.5).
    For small noise it needs an encoded file size practically at Shannon's limit. The difference appears for large noise: it needs a file size at most twice larger than the limit.
    A practical method for large noise gives a new way to increase the capacity of transmission lines and storage devices - for example place two bits where we would normally place one - the cost is a large noise increase, but now we can handle it.

    For extremely large noise, we can no longer use ANS. Fig. 3 of the paper shows how to handle it. For example, if we have to increase the size of the file 100 times, we can encode each bit in 100 bits - encode 1 as 11...111 XOR 'hash value of the already encoded message', and the same with 0. Now while creating the tree, each split will have a different number of corrected bits - a different weight.
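
    A rough sketch of that construction (my own guess at the details - the hash function, its use and the block assembly are assumptions for illustration, not taken from the paper):
    Code:
        # Encode each message bit into R bits: all-zeros or all-ones XORed with a
        # pseudo-random mask derived from everything encoded so far.
        import hashlib

        R = 100

        def mask(prefix_bits):
            """R pseudo-random bits depending on the already encoded prefix."""
            digest = hashlib.sha256(bytes(prefix_bits)).digest()
            bits = []
            while len(bits) < R:
                digest = hashlib.sha256(digest).digest()
                bits.extend((byte >> i) & 1 for byte in digest for i in range(8))
            return bits[:R]

        def encode(message_bits):
            out = []
            for b in message_bits:
                out.extend(x ^ b for x in mask(out))   # b=1: complement of mask, b=0: mask
            return out

        print(len(encode([1, 0, 1])))   # 300 bits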
    Last edited by Jarek Duda; June 24th, 2009 at 10:01 AM.
  #10 - Contributing User (Devshed Intermediate, joined Feb 2004, San Francisco Bay)
    I'm not an expert on error-correcting codes. Most of us aren't. I could still try to read your paper and give comments (I'm not completely new to the field), but it would take a lot of effort on my part (especially considering the language barrier), and how much would you really gain by that? If you want valuable feedback, submit your paper to a peer-reviewed journal or a conference. Both will give you the opportunity to have your ideas vetted by the experts. Some of them might be able to determine the merit of your approach (or the lack of it) quickly, unlike me, who would have to undertake a careful and painful analysis.

    In case anybody doesn't know: the arXiv, where the OP has posted his paper, has no peer-review process. It's good for what it is, but there is no pretense of merit.
  #11 - Contributing User (Devshed Newbie, joined Dec 2007)
    It costs too much in nerves and patience to go through the peer-review process as an unknown author with papers made of condensed new ideas ... for example, recently, after a year and a half I got positive reviews, but the editor said that the journal had changed its paper selection strategy and "Fractals" no longer wants papers about fractals ...
    So instead I, for example, make (open-source) demonstrations which clearly show that everything works fine, and I can explain it on forums ...
    This is my first PhD paper - it will go through 2 reviewers in a few months.

    About modern error correction methods: LDPC codes come very near the theoretical limit, but they tend to require exponential correction time and so are practically not used.
    The most widely used are Turbo Codes and algebraic constructions like Reed-Solomon, but these codes use independent blocks - they cannot completely remove noise (they only reduce it), they generally work only for small noise, and they are not very fast (especially for the very large blocks needed to get near Shannon's limit).

    The presented approach shows how to connect the redundancy of such blocks - now we can literally transfer unused redundancy to help with large local error concentrations (see the width mode in the simulator). Thanks to this, it's not even important how we add this redundancy ...
    It can also be thought of as spreading checksums uniformly over the message.
    ... and we get very nice mathematics of random trees, which really aren't complicated to define - at each moment we know the structure of the tree and we want to find the most probable leaf to expand. Just run the simulator and make a few steps ...
    And discrete math is more helpful here than a background in information theory. Everything needed is in the paper.

    Thanks for the response, and please ask if you have any questions.
  #12 - Contributing User (Devshed Intermediate, joined Feb 2004, San Francisco Bay)
    Originally Posted by Jarek Duda
    It costs too much in nerves and patience to go through the peer-review process as an unknown author with papers made of condensed new ideas ... for example, recently, after a year and a half I got positive reviews, but the editor said that the journal had changed its paper selection strategy and "Fractals" no longer wants papers about fractals ...
    Yeah, it might be nerve-wracking, but it is the proper way. Even just bouncing your ideas off colleagues in your department would be better than what you're doing now.
    Originally Posted by Jarek Duda
    So instead I, for example, make (open-source) demonstrations which clearly show that everything works fine, and I can explain it on forums ...
    Demonstrations are valuable in their own way, but they're no substitute for proofs, and proofs need to be verified by the experts (unless they're simple enough to be understood by a general mathematical audience).
    Originally Posted by Jarek Duda
    About modern error correction methods: LDPC codes come very near the theoretical limit, but they tend to require exponential correction time and so are practically not used.
    The most widely used are Turbo Codes and algebraic constructions like Reed-Solomon, but these codes use independent blocks - they cannot completely remove noise (they only reduce it), they generally work only for small noise, and they are not very fast (especially for the very large blocks needed to get near Shannon's limit).
    Speaking of that, I noticed that you mentioned the "4+3" Hamming code and the triple redundancy code repeatedly. They're valuable pedagogically, perhaps, but as far as I know, they're not very useful. You'd get people's attention a lot faster if you compared your method to the state of the art. I think Reed-Solomon codes are used today in some applications, but I'm not sure. You would need to do a hard scientific comparison; vague statements like "they cannot completely remove noise (they only reduce it)" are useless. (As stated, that sentence applies to all error-correction methods).
    Originally Posted by Jarek Duda
    The presented approach shows how to connect the redundancy of such blocks - now we can literally transfer unused redundancy to help with large local error concentrations (see the width mode in the simulator). Thanks to this, it's not even important how we add this redundancy ...
    It can also be thought of as spreading checksums uniformly over the message.
    Interesting, but very vague. It may capture the reader's imagination, but that shouldn't be your primary goal if you want your work to be accepted by the scientific community.

    Posting on an internet forum like this is a terrible way to get your work accepted into the mainstream. We're not the scientific community; most of us are just programmers. The correct approach, of course, is to present your work directly to the people who are in a position to judge it, such as people who review the appropriate journals, the audiences of those journals, people who attend conferences in the field, and generally fellow mathematicians & computer scientists who are at least well acquainted with the field. If you insist on proceeding through the internet, then there are probably some mailing lists that are more appropriate than this forum.

    Let's be serious now: if your approach is genuinely novel and has merit, then you should have no trouble at all getting it accepted into a good journal; there's really no excuse. What you're presenting is a method for detecting and correcting errors, with some constraints on the types of errors that occur. If your method is really practical, then it should be very easy to demonstrate that just with experimental data. If in addition to the data you have rigorous proofs to explain the experiments, then you have a compelling case that should seize the attention of your audience.

    Comments on this post

    • jwdonahue agrees : Reed-Solomon codes are fairly common today in deep space communications networks, and I've even used them over noisy terrestrial communications channels.
  #13 - Contributing User (Devshed Newbie, joined Dec 2007)
    I really would gladly cooperate with somebody ... but I'm usually working on my own new ideas, jumping between completely different subjects ... and for example information theory is the basis of all data transmission and storage ... yet it turned out to be a big problem to find two reviewers in Poland - nobody is interested in research in this field ... so it's not surprising that the simplest entropy coder was still missing ...
    About proofs - sometimes the best one is a working program or good intuition - as in the ANS case ... there is even too much randomness there for practical formal proofs ...

    About the examples from the paper - they were only pedagogical. I've tried to find precise information about implementations of 'state of the art' methods, but they are usually just secret...
    I'm explaining why independent blocks (Turbo Codes, Reed-Solomon) cannot be very good, and showing a completely general method of connecting their redundancy so that its surpluses can be transferred between blocks - improving any independent block code.

    "They only reduce noise", means that for independent blocks, with some probability we exceed error level that can be repaired for a block - some part of blocks will be still damaged - we only reduce noise. Like in 0->000 - sometimes there will be two damages and we will correct in wrong way not realizing it.
    Connecting redundancy (and LDPC) is made to completely repair the message - the probability of that retransmission will be needed can be chosen as small as we want.
    It may capture the reader's imagination, but that shouldn't be your primary goal if you want your work to be accepted by the scientific community.
    I value good imagination higher than a formal proof.
    And I value ideas being widely used higher than being accepted by a few theoreticians.
    ... and opensource simulator higher than some mysterious table of experimental results ...
    Simulator is program which solutions can be used by programmers and which is made to build intuition - everybody can see that it literally gather surrounding redundancy to cope with errors ...
    Ok - I have traumatic experience about peer-review journals ... I prefer to use this time on working on something else ... and there are different methods to get to 'specialists' ...
    But these ideas are usually really simple and nice - can be understood and appreciated by anyone with some mathematical background and a bit of good will ...
    I know - there are some well known ways of doing different things ... unfortunately I had always problems with fitting myself there, had to find my own ... but fortunately I don't have to regret anything (yet?)

    -----------------------------

    The problem that the expected width of the tree is infinite for p_d < p_d^2 is in fact only theoretical - it concerns infinite messages.
    In practice we use finite blocks and have to accept some probability of irreversible damage to a block.
    Choosing this probability as small as we want introduces a 'cut-off' in the integrals that gave the infinite width - now the width is finite: for any p_d > p_d^0 we can choose a large enough number of states (like 2^(10+64)) to make the probability of an undetectable wrong correction much smaller than the chosen probability of a very large local error concentration.
    Now if, while correcting, we exceed some chosen number of steps, we declare that we cannot correct this block and, for example, ask for retransmission of this block or part of it.
    Alternatively, if the data is extremely important, we could try to additionally correct it in the reverse direction (from 'the final state') and finally try to glue both corrections together. Such 'backward' correction usually requires much more computation.

    To summarize - in practice connected redundancy allows us to get as near Shannon's limit as we want in linear time (the nearer, the larger the constant).
    Last edited by Jarek Duda; July 17th, 2009 at 10:11 AM.
  #14 - Contributing User (Devshed Newbie, joined Dec 2007)
    I have defended my PhD in computer science (one in physics is now in progress), half of which was about this new approach to data correction - basically it extends the convolutional codes concept to much larger states (which requires working not on the whole space of states as usual, but only on the used tree of states, and allows practically complete repair in linear time) and uses an entropy coder to add the redundancy (simultaneous data compression, and the rate can be changed fluently).
    The thesis is the arXiv paper with a few small improvements (it can be downloaded from http://tcs.uj.edu.pl/graduates.php?degree=1&lang=0 ).
    Here is the presentation I used - with a few new pictures which could make understanding the concepts (and asymmetric numeral systems) easier, and e.g. some comparison to LDPC.
    Last edited by Jarek Duda; December 11th, 2010 at 02:48 AM.
  #15 - Contributing User (Devshed Newbie, joined Dec 2007)
    I apologize for digging this thread up, but there is finally a practical implementation, and it beats modern state-of-the-art methods in many applications. It can be seen as a greatly improved convolutional-code-like concept - for example it no longer uses convolution, but a carefully designed, extremely fast operation that allows working on much larger states instead. Other main improvements are bidirectional decoding and a heap (logarithmic complexity) instead of the stubbornly used stack (linear complexity). For simplicity it will be called Correction Trees (CT).
    The most important improvement is that it can handle larger channel damage at the same rate. Adding redundancy to double (rate 1/2) or triple (rate 1/3) the message size should theoretically allow completely repairing up to 11% or 17.4% damaged bits, respectively, for the Binary Symmetric Channel (each bit is independently flipped with this probability). Unfortunately, this Shannon limit is rather unreachable - in practice we can only reduce the Bit Error Rate (BER), and only when the noise is significantly below this limit. Turbo Codes (TC) and Low Density Parity Check (LDPC) are nowadays seen as the best methods - here is a comparison of some of their implementations with the CT approach, showing output BER versus input noise level:

    [Figure: output BER versus input noise level for Turbo Code, LDPC and Correction Trees implementations]

    We can see that CT still repairs where the others have given up - making it well suited for extreme applications like deep-space or underwater communication. Unfortunately, repairing such extreme noise requires extreme resources - a software implementation on a modern PC decodes a few hundred bytes per second at extreme noise levels. Additionally, with more resources the correction capability can be improved further (lowering the line in the figure above).
    On the other hand, CT encoding is extremely fast and correction for low noise is practically free - up to about 5-6% noise for rate 1/2. In comparison, TC correction always requires a lot of computation, while LDPC additionally requires a lot of work just for encoding.
    So in contrast to them, CT is just about perfect for e.g. hard disks - everyday use involves low noise, so using CT would be extremely cheap. On the other hand, if the disk is really badly damaged, correction is still possible, but it becomes costly. Such correction could also be done externally, allowing further improvement of the correction capabilities.

    Paper: http://arxiv.org/abs/1204.5317
    Implementation: https://indect-project.eu/correction-trees/
    Simulator: http://demonstrations.wolfram.com/CorrectionTrees/
