[Imgcif-l] High speed image compression

Nicholas Sauter nksauter at lbl.gov
Fri Jul 29 05:41:37 BST 2011


Justin,

I'm also getting disappointing results with the code you sent.

Compiled with g++ -O3, on 64-bit Fedora 8 Intel Xeon 2.93 GHz, I'm getting a
compress time of 80-90 ms.  What kind of throughput are you aiming for... <40 ms?

I'll continue testing a bit more tomorrow...

Nick

On Thu, Jul 28, 2011 at 3:36 PM, Justin Anderson <justin at rayonix.com> wrote:

> Thanks Nicholas.
>
> I only made a couple of small changes to Graeme's code: (1) load an image
> from a file and write the result back to a file, and (2) pass the data
> vectors by reference.  The second change seems to have sped things up a
> little, but it's still taking 110-130 ms to compress, which is too slow.  We
> are not as concerned with decompression speed, as that will not need to
> happen in real-time.
>
> I put it on our FTP here:
> ftp://ftp.rayonix.com/pub/del_in_30_days/byte_offset.tgz
>
> Thanks,
>
> Justin
>
>
> On 7/28/11 2:06 PM, Nicholas Sauter wrote:
>
>> Justin,
>>
>> Just some comments based on our experience...first, I haven't tried the
>> compression extensively, just the decompression.  But I've found Graeme's
>> decompression code to be significantly faster than the CBF library, first
>> because it is buffer-based instead of file-based, and also because it
>> hard-codes some assumptions about data depth.
>>
>> I'd be happy to examine this in more detail if there is some way to share
>> your code example...
>>
>> Nick
>>
>> On Thu, Jul 28, 2011 at 11:46 AM, Justin Anderson <justin at rayonix.com>
>> wrote:
>>
>>> Hello all,
>>>
>>> I have run Graeme's byte offset code on a 4k x 4k (2-byte depth) Gaussian
>>> noise image and found that it compresses the image in around 150 ms
>>> (64-bit RHEL, Pentium D 3.46 GHz).  Using the CBF library with byte-offset
>>> compression, I find the compression takes around 125 ms.
>>>
>>> This will be too slow to keep up with our high-speed CCD cameras.  We are
>>> considering parallelizing the byte offset routine by operating on each
>>> line of the image individually.  Note that this would mean a given
>>> compressed image would be stored differently than it is by the whole-image
>>> algorithm.
>>>
>>> Has anyone been thinking about this already or does anyone have any
>>> thoughts?
>>>
>>> Regards,
>>>
>>> Justin
>>>
>>> --
>>> Justin Anderson
>>> Software Engineer
>>> Rayonix, LLC
>>> justin at rayonix.com
>>> 1880 Oak Ave. #120
>>> Evanston, IL, USA 60201
>>> PH:+1.847.869.1548
>>> FX:+1.847.869.1587
>>>
>>>
>>> _______________________________________________
>>> imgcif-l mailing list
>>> imgcif-l at iucr.org
>>> http://scripts.iucr.org/mailman/listinfo/imgcif-l
>>>
>>>
>>>
>>
>>
> _______________________________________________
> imgcif-l mailing list
> imgcif-l at iucr.org
> http://scripts.iucr.org/mailman/listinfo/imgcif-l
>
>


-- 
Nicholas K. Sauter, Ph. D.
Computer Staff Scientist/Engineer
Physical BioSciences Division
Lawrence Berkeley National Laboratory
1 Cyclotron Rd., Bldg. 64R0121
Berkeley, CA 94720-8118
(510) 486-5713

