[Imgcif-l] High speed image compression

Herbert J. Bernstein yaya at bernstein-plus-sons.com
Fri Jul 29 11:53:54 BST 2011


I agree.  On my Mac, the time also drops sharply with pre-allocation and
indexed assignment ([]) instead of push_back.
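
A minimal sketch of that pattern (the function and buffer names are mine, not from Graeme's code): size the output once to a bound that is "certainly" large enough, fill it with indexed assignment, and trim afterwards.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical example: pack 16-bit pixels into a byte buffer.  The
// output vector is allocated once at an upper bound, then filled with
// packed[p++] = c, so the hot loop never grows the buffer the way
// push_back can (capacity checks and occasional reallocation).
std::vector<unsigned char> pack_preallocated(const std::vector<short>& img) {
    std::vector<unsigned char> packed(img.size() * 2);  // pre-allocate once
    std::size_t p = 0;
    for (short v : img) {
        packed[p++] = static_cast<unsigned char>(v & 0xff);
        packed[p++] = static_cast<unsigned char>((v >> 8) & 0xff);
    }
    packed.resize(p);  // trim to the bytes actually written
    return packed;
}
```

When only a bound on the output size is known (as with byte-offset output), a reserve() up front keeps push_back from reallocating; fully pre-sizing and indexing, as above, additionally skips push_back's per-element bookkeeping.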


At 10:51 AM +0200 7/29/11, Jonathan WRIGHT wrote:
>Dear Justin,
>
>Your code counts the time spent compressing, but not the time spent
>writing the file, which is much longer for me. As it stands, you might
>gain a little by adding "packed.reserve(size*2)" just before the call to
>compress (54 ms down to 38 ms here on Vista64, 3.3 GHz). That falls
>further (to 28 ms) if you stop using "push_back" and instead allocate
>something which is "certainly" large enough to start with and use
>packed[p++]=c.
>
>Cheers,
>
>Jon
>
>On 29/07/2011 00:36, Justin Anderson wrote:
>>  Thanks Nicholas.
>>
>>  I only made a couple of small changes to Graeme's code: 1) to load an
>>  image from a file and write the result to a file, and 2) to pass the
>>  data vectors by reference. The latter change seems to have sped things
>>  up a little, but it's still taking 110-130 ms to compress, which is too
>>  slow. We are not as concerned with decompression speed, as that will
>>  not need to occur in real-time.
>>
>>  I put it on our FTP here:
>>  ftp://ftp.rayonix.com/pub/del_in_30_days/byte_offset.tgz.
>>
>>  Thanks,
>>
>>  Justin
>>
>>  On 7/28/11 2:06 PM, Nicholas Sauter wrote:
>>>  Justin,
>>>
>>>  Just some comments based on our experience...first, I haven't tried the
>>>  compression extensively, just the decompression. But I've found Graeme's
>>>  decompression code to be significantly faster than the CBF library, first
>>>  because it is buffer-based instead of file-based, and also because it
>>>  hard-codes some assumptions about data depth.
>>>
>>>  I'd be happy to examine this in more detail if there is some way to share
>>>  your code example...
>>>
>>>  Nick
>>>
>>>  On Thu, Jul 28, 2011 at 11:46 AM, Justin
>>>  Anderson<justin at rayonix.com>wrote:
>>>
>>>>  Hello all,
>>>>
>>>>  I have run Graeme's byte-offset code on a 4k x 4k (2-byte depth)
>>>>  Gaussian noise image and found that it compresses the image in
>>>>  around 150 ms (64-bit RHEL, Pentium D 3.46 GHz). Using the CBF
>>>>  library with byte-offset compression, I find the compression takes
>>>>  around 125 ms.
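
For reference, the core of the byte-offset scheme being timed can be sketched as follows. This is my own sketch of the CBF-style encoding, not Graeme's code, and it shows only the 8-, 16-, and 32-bit delta levels (the full scheme escalates once more to 64-bit deltas):

```cpp
#include <cstdint>
#include <vector>

// Sketch of byte-offset encoding: store each pixel's delta from the
// previous pixel in one byte when it fits in -127..127; otherwise emit
// the one-byte escape 0x80 followed by a wider little-endian delta.
std::vector<uint8_t> byte_offset_compress(const std::vector<int32_t>& img) {
    std::vector<uint8_t> out;
    out.reserve(img.size() * 7);  // bound: escape bytes + 32-bit delta
    int32_t prev = 0;
    for (int32_t v : img) {
        int32_t delta = v - prev;  // assumes image values don't overflow this
        if (delta >= -127 && delta <= 127) {
            out.push_back(static_cast<uint8_t>(delta & 0xff));
        } else {
            out.push_back(0x80);  // escape: wider delta follows
            if (delta >= -32767 && delta <= 32767) {
                out.push_back(static_cast<uint8_t>(delta & 0xff));
                out.push_back(static_cast<uint8_t>((delta >> 8) & 0xff));
            } else {
                // 16-bit escape value (0x8000, little-endian), then a
                // full 32-bit little-endian delta
                out.push_back(0x00);
                out.push_back(0x80);
                for (int s = 0; s < 32; s += 8)
                    out.push_back(static_cast<uint8_t>((delta >> s) & 0xff));
            }
        }
        prev = v;
    }
    return out;
}
```

For smooth detector images most deltas fall in the one-byte case, which is why the output is usually much smaller than the raw 2-byte data.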
>>>>
>>>>  This will be too slow to keep up with our high-speed CCD cameras. We
>>>>  are considering parallelizing the byte-offset routine by operating on
>>>>  each line of the image individually. Note that this would mean that a
>>>>  given compressed image would be stored differently than one
>>>>  compressed with the whole-image algorithm.
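
The per-line variant could be sketched like this (a hypothetical layout of mine, not an agreed format: each row restarts its delta chain at zero so rows compress independently, and the per-row chunk sizes would have to be stored alongside the data for decompression):

```cpp
#include <cstddef>
#include <cstdint>
#include <future>
#include <vector>

// Stand-in single-row byte-offset pass.  Only the 8-/16-bit delta
// levels are sketched; a full implementation would escalate to wider
// deltas as in the whole-image algorithm.
std::vector<uint8_t> compress_row(const int16_t* row, std::size_t n) {
    std::vector<uint8_t> out;
    out.reserve(n * 3);
    int32_t prev = 0;  // delta chain restarts at every row boundary
    for (std::size_t i = 0; i < n; ++i) {
        int32_t delta = static_cast<int32_t>(row[i]) - prev;
        if (delta >= -127 && delta <= 127) {
            out.push_back(static_cast<uint8_t>(delta & 0xff));
        } else {
            out.push_back(0x80);
            out.push_back(static_cast<uint8_t>(delta & 0xff));
            out.push_back(static_cast<uint8_t>((delta >> 8) & 0xff));
        }
        prev = row[i];
    }
    return out;
}

// Compress each row in its own task; the resulting per-row chunks (and
// their sizes) are what the caller would write out as the index.
std::vector<std::vector<uint8_t>>
compress_rows_parallel(const std::vector<int16_t>& img,
                       std::size_t width, std::size_t height) {
    std::vector<std::future<std::vector<uint8_t>>> tasks;
    for (std::size_t r = 0; r < height; ++r)
        tasks.push_back(std::async(std::launch::async, compress_row,
                                   img.data() + r * width, width));
    std::vector<std::vector<uint8_t>> chunks;
    for (auto& t : tasks)
        chunks.push_back(t.get());
    return chunks;
}
```

Because each row is self-contained, decompression of individual rows also becomes possible without scanning the whole stream, at the cost of a slightly larger file (one restarted delta plus one stored length per row).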
>>>>
>>>>  Has anyone been thinking about this already or does anyone have any
>>>>  thoughts?
>>>>
>>>>  Regards,
>>>>
>>>>  Justin
>>>>
>>>>  --
>>>>  Justin Anderson
>>>>  Software Engineer
>>>>  Rayonix, LLC
>>>>  justin at rayonix.com
>>>>  1880 Oak Ave. #120
>>>>  Evanston, IL, USA 60201
>>>>  PH:+1.847.869.1548
>>>>  FX:+1.847.869.1587
>>>>
>>>>
>>>>  _______________________________________________
>>>>  imgcif-l mailing list
>>>>  imgcif-l at iucr.org
>>>>  http://scripts.iucr.org/mailman/listinfo/imgcif-l
>>>>
>>>>
>>>
>>>
>>
>>


-- 
=====================================================
  Herbert J. Bernstein, Professor of Computer Science
    Dowling College, Kramer Science Center, KSC 121
         Idle Hour Blvd, Oakdale, NY, 11769

                  +1-631-244-3035
                  yaya at dowling.edu
=====================================================

