#1
  1. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Nov 2012
    Posts
    32
    Rep Power
    2

    Image shreds joiner


    I have a folder containing 30 files, named 1 to 30 (not in their sequential order); each file is a vertical slice of a full picture. I need to write a program that builds a list of numbers representing the correct "order" of the files, so that joining the shreds in that order reproduces the original picture.

    The problem is that there are no "common" columns of pixels between two matching slices, which forces me to work with some kind of percentage/similarity measure.

    My question is: what is a good criterion for similarity between two matching slices?

    Note that I convert the picture to a bitmap format and view it as a raw matrix in Python.
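    [Editor's note: one common criterion, offered here as an illustration rather than an answer from the thread, is the sum of squared differences between the touching edge columns of two candidate slices; smaller scores mean a better match. A minimal sketch, assuming each slice is a list of rows of grayscale pixel values:]

```python
def edge_ssd(left_slice, right_slice):
    """Sum of squared differences between the right edge column of
    left_slice and the left edge column of right_slice.
    Each slice is a list of rows of grayscale pixel values."""
    return sum((lrow[-1] - rrow[0]) ** 2
               for lrow, rrow in zip(left_slice, right_slice))

# Two 3-row slices whose touching edges differ only slightly.
a = [[10, 20], [10, 21], [10, 19]]
b = [[22, 50], [20, 50], [20, 50]]
print(edge_ssd(a, b))  # (20-22)^2 + (21-20)^2 + (19-20)^2 = 6
```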
  2. #2
  3. Contributing User
    Devshed Demi-God (4500 - 4999 posts)

    Join Date
    Aug 2011
    Posts
    4,904
    Rep Power
    481
    I answered a similar question here http://forums.devshed.com/python-programming-11/image-stitching-in-python-921559.html except that I answered in executable Iverson notation, which apparently annoyed the person who started the thread. Anyway: take a good guess, then shift the picture within a reasonable window until it fits best. Are there scaling and rotation to consider as well? If so, you've got a harder problem because of the extra variables. You could work with averages of, say, 9-pixel (3&times;3) blocks to reduce the data size by roughly an order of magnitude.
    Last edited by b49P23TIvg; January 13th, 2013 at 04:46 PM.
    [code]Code tags[/code] are essential for python code and Makefiles!
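    [Editor's note: the block-averaging idea above can be sketched as follows, assuming a grayscale image stored as a list of rows whose dimensions divide evenly by 3:]

```python
def downsample3(img):
    """Average non-overlapping 3x3 blocks of a grayscale matrix
    (list of rows); height and width are assumed divisible by 3."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, 3):
        row = []
        for c in range(0, w, 3):
            block = [img[r + i][c + j] for i in range(3) for j in range(3)]
            row.append(sum(block) / 9)
        out.append(row)
    return out

# A 3x3 image of all 9s averages to a single pixel of value 9.0.
print(downsample3([[9, 9, 9]] * 3))  # [[9.0]]
```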
  4. #3
  5. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Nov 2012
    Posts
    32
    Rep Power
    2
    Originally Posted by b49P23TIvg
    I answered a similar question here http://forums.devshed.com/python-programming-11/image-stitching-in-python-921559.html except that I answered in executable Iverson notation. Anyway, you take a good guess and then shift the picture within a reasonable space until it fits best. Now, are there scaling and rotation to consider also? If so, then you've got a more difficult problem. You could work with averages of, say, 9 pixel blocks to reduce the data size by a decade.

    No rotation or scaling needs to be taken into consideration. The slices are perfect vertical slices; all I need to do is stitch them together and restore the picture to its original state. I am currently working on an "average" algorithm, but it is not going too well so far!
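    [Editor's note: once the order is recovered, joining perfect vertical slices is just row-wise concatenation. A minimal sketch, assuming each slice is a list of rows of equal height and `order` is the recovered sequence of slice indices:]

```python
def join_slices(slices, order):
    """Concatenate vertical slices left-to-right in the given order.
    Each slice is a list of rows; all slices share the same height."""
    return [sum((slices[k][r] for k in order), [])
            for r in range(len(slices[0]))]

left = [[1, 2], [3, 4]]
right = [[5], [6]]
print(join_slices([right, left], order=[1, 0]))  # [[1, 2, 5], [3, 4, 6]]
```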
  6. #4
  7. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Nov 2012
    Posts
    32
    Rep Power
    2
    anyone?
  8. #5
  9. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Nov 2012
    Posts
    43
    Rep Power
    2
    Originally Posted by Nightmareix35
    anyone?
    This is a really tricky one. I can't give you any code, but here is how I would go about solving the problem.

    Every slice of the image should have a 'best left' and 'best right' variable which stores the best image it has found for that side.

    I would iterate through every image, compare its leftmost column of pixels to the rightmost column of every other image, and compute the average of the differences between hues.

    Every time it finds an image with a smaller average than its stored value, it updates the stored value with the new image's value.

    Then repeat that on the right side.


    As pseudo code:

    Code:
    pictures=array which stores all the images
    pictureLeft=array the size of pictures, stores index of best picture on the left
    pictureAvgLeft=array the size of pictures, stores best average (initialised to infinity)
    pictureRight=array the size of pictures, stores index of best picture on the right
    pictureAvgRight=array the size of pictures, stores best average (initialised to infinity)
    for i in range(pictures):
        for x in range(pictures):
            if i!=x:
                avg=0
                for z in range(pixels on left of my picture, and pixels on right of other picture - any resizing would be done before this point because the pixels need to line up):
                    avg+=difference between hues
                if avg<pictureAvgLeft[i]: #If the current picture is better..
                    pictureAvgLeft[i]=avg
                    pictureLeft[i]=x
                avg=0
                for z in range(pixels on RIGHT of my picture, and pixels on LEFT of other picture):
                    avg+=difference between hues
                if avg<pictureAvgRight[i]:
                    pictureAvgRight[i]=avg
                    pictureRight[i]=x



    I know it's not ACTUAL code and you'll need to do the real work yourself, but the idea behind my pseudo code above is that you iterate through all the images, find for each side the image with the lowest average edge difference, and then assume that is the best image for that side.

    I'm not sure if this process would work, but that's how I would attempt to do it.
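    [Editor's note: the pseudo code above can be turned into runnable Python along these lines. This is a sketch under the assumption that each picture is a list of rows of grayscale values; variable names follow the pseudo code, and the best-match index stored is `x`, the other picture's index:]

```python
def best_neighbours(pictures):
    """For each picture, find the picture whose edge matches best on
    each side, scored by the average absolute difference between the
    touching edge columns (smaller = better match)."""
    n = len(pictures)
    INF = float("inf")
    pictureLeft = [None] * n    # index of best picture to the LEFT of i
    pictureRight = [None] * n   # index of best picture to the RIGHT of i
    pictureAvgLeft = [INF] * n
    pictureAvgRight = [INF] * n
    for i in range(n):
        h = len(pictures[i])
        for x in range(n):
            if i == x:
                continue
            # other picture's right edge against my left edge
            avg = sum(abs(pictures[x][r][-1] - pictures[i][r][0])
                      for r in range(h)) / h
            if avg < pictureAvgLeft[i]:
                pictureAvgLeft[i] = avg
                pictureLeft[i] = x
            # my right edge against the other picture's left edge
            avg = sum(abs(pictures[i][r][-1] - pictures[x][r][0])
                      for r in range(h)) / h
            if avg < pictureAvgRight[i]:
                pictureAvgRight[i] = avg
                pictureRight[i] = x
    return pictureLeft, pictureRight

# Three one-row slices whose true left-to-right order is 1, 0, 2.
pictures = [[[10, 20]], [[1, 9]], [[21, 30]]]
left, right = best_neighbours(pictures)
print(right)  # [2, 0, 0]: the recovered chain is 1 -> 0 -> 2
```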
  10. #6
  11. Contributing User
    Devshed Demi-God (4500 - 4999 posts)

    Join Date
    Aug 2011
    Posts
    4,904
    Rep Power
    481
    oops! I didn't understand that the pictures were unordered and needed to be sorted by matching them up.

    If the pictures were photographed sequentially by date but you've lost the original dates, maybe you can recover them with the ImageMagick identify command.
    [code]Code tags[/code] are essential for python code and Makefiles!
  12. #7
  13. No Profile Picture
    Contributing User
    Devshed Newbie (0 - 499 posts)

    Join Date
    Nov 2012
    Posts
    32
    Rep Power
    2
    Originally Posted by eliskan
    This is a really tricky one. I can't give you any code, but here is how I would go about solving the problem... [pseudo code snipped; quoted in full in post #5]
    Your idea is quite close to my attempt. Basically, I loop through all the slices checking for matching pixels on the sides. I keep a counter of the matches, put all the pairs in a list with their corresponding match counts, choose the pair with the maximum value, discard the rest, and repeat for all the slices.

    I have 27 pictures to reconstruct, each sliced into 70 shreds to be stitched. My code currently reconstructs 60% of the pictures flawlessly, but I am still having problems with the rest (some pictures are only partially reconstructed, and others raise an error).

    I'm going to try to incorporate an averaging factor; maybe it will improve the accuracy a bit.
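    [Editor's note: the pick-the-maximum-and-discard approach described above is a greedy pairing. One way to sketch it, assuming slices are lists of rows of grayscale values and using edge difference as the cost (lower = better): score every ordered pair, accept pairs from best to worst while each slice's side is still free and no cycle would form, then walk the chain. The helper names here are illustrative, not from the thread:]

```python
def recover_order(slices):
    """Greedy ordering: score every (left, right) pair by total edge
    difference, accept the best-scoring pairs whose sides are still
    free (avoiding cycles), then walk the resulting chain."""
    n = len(slices)

    def score(a, b):  # cost of placing slice b directly right of slice a
        return sum(abs(ra[-1] - rb[0]) for ra, rb in zip(slices[a], slices[b]))

    pairs = sorted((score(a, b), a, b)
                   for a in range(n) for b in range(n) if a != b)
    right_of = {}
    left_of = {}
    for _, a, b in pairs:
        if a in right_of or b in left_of:
            continue  # that side is already taken by a better match
        start = a     # walk left from a; linking a->b must not close a loop
        while start in left_of:
            start = left_of[start]
        if start == b:
            continue
        right_of[a] = b
        left_of[b] = a
    # the leftmost slice is the one with nothing on its left
    start = next(i for i in range(n) if i not in left_of)
    order = [start]
    while order[-1] in right_of:
        order.append(right_of[order[-1]])
    return order

# Three one-row slices whose true order is 2, 0, 1.
slices = [[[10, 20]], [[21, 30]], [[1, 9]]]
print(recover_order(slices))  # [2, 0, 1]
```

    A greedy pass like this can still mis-link on ambiguous edges, which may explain partially reconstructed pictures; a more robust variant would keep several candidates per side and backtrack.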
