[Imgcif-l] High speed image compression

Justin Anderson justin at rayonix.com
Fri Jul 29 16:41:48 BST 2011


Thank you everyone for the great suggestions.

Note: I am intentionally not including the time to write the compressed data 
to disk.  I want to test only the compression time, not the disk speed.  We 
will be writing these files to a PCIe solid state drive in production, and 
these drives can write uncompressed frames in real time.

Our goal is to be comfortably under 100 ms with the 4K (actually 1920 x 
1920), 2-byte images to keep up at 10 fps.

On an Intel Core i7 940 processor the same code runs in 50 - 60 ms.

Some new runtimes (on the Core i7):
    Reserving the vector space for the compressed data ahead of time:
       40 - 50 ms
    Adding compressed data via address instead of push_back:
       30 - 40 ms
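
For anyone curious what those two changes look like in practice, here is a
minimal sketch of a byte-offset-style packing loop with a worst-case
preallocation and indexed writes in place of push_back.  The function name
and the exact escape handling are illustrative, not Graeme's actual code,
and the multi-byte deltas assume a little-endian host:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative byte-offset packer (not Graeme's code).  Small pixel-to-pixel
// deltas are stored in 1 byte; larger deltas are escaped to 2 or 4 bytes.
std::vector<char> byte_offset_pack(const std::vector<int16_t>& values)
{
    // Preallocate the worst case up front: 1 escape byte + 2-byte escape
    // + 4-byte delta = 7 bytes per pixel.  This replaces repeated
    // push_back growth with a single allocation.
    std::vector<char> packed(values.size() * 7);
    std::size_t p = 0;          // write index into packed
    int32_t last = 0;

    for (int16_t v : values) {
        int32_t delta = static_cast<int32_t>(v) - last;
        last = v;
        if (delta >= -127 && delta <= 127) {
            packed[p++] = static_cast<char>(delta);      // 1-byte delta
        } else if (delta >= -32767 && delta <= 32767) {
            packed[p++] = static_cast<char>(0x80);       // escape to 16 bits
            int16_t d16 = static_cast<int16_t>(delta);
            std::memcpy(&packed[p], &d16, 2);            // host endianness
            p += 2;
        } else {
            packed[p++] = static_cast<char>(0x80);       // escape to 32 bits
            int16_t big = static_cast<int16_t>(0x8000);
            std::memcpy(&packed[p], &big, 2);
            p += 2;
            std::memcpy(&packed[p], &delta, 4);
            p += 4;
        }
    }
    packed.resize(p);   // trim to the actual compressed size
    return packed;
}
```

The point is simply that packed[p++] into a buffer that is "certainly"
large enough avoids both reallocation and the bookkeeping inside
push_back, which is where the time was going.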

Hopefully this will still work once the image correction and transfer times 
are included.

~Justin

On 7/29/11 9:40 AM, Herbert J. Bernstein wrote:
> And you can gain a little more speed once you preallocate by
> switching internally from indexed references to Vectors to
> indexed references to C pointers to the same Vectors,
> e.g.
>
>       const int16_t * vptr;
>       char * pptr;
>       vptr = &values[0];
>
> and, after you preallocate packed
>
>       pptr = &packed[0];
>
>
> At 6:53 AM -0400 7/29/11, Herbert J. Bernstein wrote:
>> I agree.  On my Mac, the time also drops sharply with pre-allocation and []
>> instead of push_back.
>>
>>
>> At 10:51 AM +0200 7/29/11, Jonathan WRIGHT wrote:
>>> Dear Justin,
>>>
>>> Your code counts the time compressing, but not the time writing the
>>> file, which is much longer for me. As it stands, you might gain a little
>>> by adding "packed.reserve(size*2)" just before the call to compress (54
>>> to 38 ms here on vista64, 3.3 GHz). That falls further (28 ms) if you
>>> stop using "push_back" and instead allocate something which is
>>> "certainly" large enough to start with and use packed[p++]=c.
>>>
>>> Cheers,
>>>
>>> Jon
>>>
>>> On 29/07/2011 00:36, Justin Anderson wrote:
>>>>    Thanks Nicholas.
>>>>
>>>>    I only made a couple small changes to Graeme's code. 1: to load an image
>>>>    from a file and write to file and 2: to pass the data vectors by
>>>>    reference. The last change seems to have sped things up a little but
>>>>    it's still taking 110 - 130 ms to compress which is too slow. We are not
>>>>    as concerned with decompression speed as that will not need to occur in
>>>>    real-time.
>>>>
>>>>    I put it on our FTP here:
>>>>    ftp://ftp.rayonix.com/pub/del_in_30_days/byte_offset.tgz.
>>>>
>>>>    Thanks,
>>>>
>>>>    Justin
>>>>
>>>>    On 7/28/11 2:06 PM, Nicholas Sauter wrote:
>>>>>    Justin,
>>>>>
>>>>>    Just some comments based on our experience...first, I haven't tried the
>>>>>    compression extensively, just the decompression. But I've found Graeme's
>>>>>    decompression code to be significantly faster than the CBF library, first
>>>>>    because it is buffer-based instead of file-based, and also because it
>>>>>    hard-codes some assumptions about data depth.
>>>>>
>>>>>    I'd be happy to examine this in more detail if there is some way to share
>>>>>    your code example...
>>>>>
>>>>>    Nick
>>>>>
>>>>>    On Thu, Jul 28, 2011 at 11:46 AM, Justin
>>>>>    Anderson <justin at rayonix.com> wrote:
>>>>>
>>>>>>    Hello all,
>>>>>>
>>>>>>    I have run Graeme's byte offset code on a 4k x 4k (2 byte depth)
>>>>>>    Gaussian
>>>>>>    noise image and found it to compress the image in around 150 ms (64-bit
>>>>>>    RHEL, Pentium D 3.46GHz). Using CBF library with byte offset
>>>>>>    compression, I
>>>>>>    find the compression takes around 125 ms.
>>>>>>
>>>>>>    This will be too slow to keep up with our high speed CCD cameras. We are
>>>>>>    considering parallelizing the byte offset routine by operating on
>>>>>>    each line
>>>>>>    of the image individually. Note that this would mean that a given
>>>>>>    compressed image would be stored differently than one compressed by
>>>>>>    the whole-image algorithm.
>>>>>>
>>>>>>    Has anyone been thinking about this already or does anyone have any
>>>>>>    thoughts?
>>>>>>
>>>>>>    Regards,
>>>>>>
>>>>>>    Justin
>>>>>>
>>>>>>    --
>>>>>>    Justin Anderson
>>>>>>    Software Engineer
>>>>>>    Rayonix, LLC
>>>>>>    justin at rayonix.com
>>>>>>    1880 Oak Ave. #120
>>>>>>    Evanston, IL, USA 60201
>>>>>>    PH:+1.847.869.1548
>>>>>>    FX:+1.847.869.1587
>>>>>>
>>>>>>
>>>>>>    _______________________________________________
>>>>>>    imgcif-l mailing list
>>>>>>    imgcif-l at iucr.org
>>>>>>    http://scripts.iucr.org/mailman/listinfo/imgcif-l
>>>>>>
>>>>>>
>>>>>
>>>>
>>
>> --
>> =====================================================
>>    Herbert J. Bernstein, Professor of Computer Science
>>      Dowling College, Kramer Science Center, KSC 121
>>           Idle Hour Blvd, Oakdale, NY, 11769
>>
>>                    +1-631-244-3035
>>                    yaya at dowling.edu
>> =====================================================
>