[Imgcif-l] High speed image compression

Nicholas Sauter nksauter at lbl.gov
Thu Jul 28 20:06:28 BST 2011


Justin,

Just some comments based on our experience.  First, I haven't tried the
compression extensively, just the decompression.  That said, I've found
Graeme's decompression code to be significantly faster than the CBF library,
both because it is buffer-based instead of file-based and because it
hard-codes some assumptions about data depth.
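For readers not familiar with the scheme being compared here: byte-offset
compression stores each pixel as a signed delta from the previous pixel,
using one byte when the delta fits, and escaping to 16- and 32-bit values
otherwise.  A minimal, illustrative Python sketch follows (function names
are my own; the real implementations are in C, and the actual CBF code also
handles a 64-bit escape level and header/metadata that are omitted here):

```python
import struct

def byte_offset_compress(pixels):
    """Encode integer pixel values as CBF-style byte-offset deltas:
    1 byte per delta when it fits in [-127, 127], otherwise an 0x80
    escape to a 16-bit delta, then an 0x8000 escape to 32 bits."""
    out = bytearray()
    prev = 0
    for v in pixels:
        d = v - prev
        prev = v
        if -127 <= d <= 127:
            out += struct.pack('<b', d)                  # 1-byte delta
        elif -32767 <= d <= 32767:
            out += b'\x80' + struct.pack('<h', d)        # escape to 16-bit
        else:
            out += b'\x80' + struct.pack('<h', -32768)   # escape to 32-bit
            out += struct.pack('<i', d)
    return bytes(out)

def byte_offset_decompress(buf, n):
    """Inverse: decode n pixel values from a byte-offset stream,
    working directly on an in-memory buffer."""
    out = []
    prev = 0
    i = 0
    for _ in range(n):
        d = struct.unpack_from('<b', buf, i)[0]; i += 1
        if d == -128:                                    # 0x80 escape
            d = struct.unpack_from('<h', buf, i)[0]; i += 2
            if d == -32768:                              # 0x8000 escape
                d = struct.unpack_from('<i', buf, i)[0]; i += 4
        prev += d
        out.append(prev)
    return out
```

The "buffer-based" point above is visible in the decoder: it walks a byte
buffer with an index rather than issuing per-value file reads.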

I'd be happy to examine this in more detail if there is some way to share
your code example...

Nick

On Thu, Jul 28, 2011 at 11:46 AM, Justin Anderson <justin at rayonix.com> wrote:

> Hello all,
>
> I have run Graeme's byte-offset code on a 4k x 4k (2-byte depth) Gaussian
> noise image and found it to compress the image in around 150 ms (64-bit
> RHEL, Pentium D 3.46 GHz).  Using the CBF library with byte-offset
> compression, I find the compression takes around 125 ms.
>
> This will be too slow to keep up with our high-speed CCD cameras.  We are
> considering parallelizing the byte-offset routine by operating on each line
> of the image independently.  Note that this would mean that a given
> compressed image would be stored differently than with the whole-image
> algorithm.
>
> Has anyone been thinking about this already or does anyone have any
> thoughts?
>
> Regards,
>
> Justin
>
> --
> Justin Anderson
> Software Engineer
> Rayonix, LLC
> justin at rayonix.com
> 1880 Oak Ave. #120
> Evanston, IL, USA 60201
> PH:+1.847.869.1548
> FX:+1.847.869.1587
>
>
> _______________________________________________
> imgcif-l mailing list
> imgcif-l at iucr.org
> http://scripts.iucr.org/mailman/listinfo/imgcif-l
>
>
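One way the per-line scheme Justin describes might be structured: reset the
delta predictor at the start of each row and compress rows independently,
recording per-row lengths so rows can be located (and decompressed) in
parallel too.  A hypothetical Python sketch is below; note that Python
threads will not speed up CPU-bound work because of the GIL, so this shows
only the structure (a C implementation would use pthreads or OpenMP), and,
as Justin notes, the resulting stream is not byte-compatible with the
whole-image byte-offset format:

```python
import struct
from concurrent.futures import ThreadPoolExecutor

def compress_row(row):
    """Byte-offset-compress one row, with the running value reset to
    zero at the row start so each row is independent of the others."""
    out = bytearray()
    prev = 0
    for v in row:
        d = v - prev
        prev = v
        if -127 <= d <= 127:
            out += struct.pack('<b', d)                  # 1-byte delta
        elif -32767 <= d <= 32767:
            out += b'\x80' + struct.pack('<h', d)        # 16-bit escape
        else:
            out += b'\x80' + struct.pack('<h', -32768)   # 32-bit escape
            out += struct.pack('<i', d)
    return bytes(out)

def compress_image_by_rows(image, workers=4):
    """Compress rows in a worker pool; return per-row compressed
    lengths (an index for random access) plus the joined stream."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = list(pool.map(compress_row, image))
    return [len(r) for r in rows], b''.join(rows)
```

The per-row length table is the price of independence: without it, a reader
cannot find row boundaries, since compressed rows have variable length.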


-- 
Nicholas K. Sauter, Ph. D.
Computer Staff Scientist/Engineer
Physical BioSciences Division
Lawrence Berkeley National Laboratory
1 Cyclotron Rd., Bldg. 64R0121
Berkeley, CA 94720-8118
(510) 486-5713
