Our paper was accepted at the IEEE Journal of Biomedical and Health Informatics. I personally learned a lot, and while the reviews were tough, they were much appreciated.

With people focusing so much on learning-based methods and forgetting the classical methods that are really the foundation of the field, I'd like to talk a little bit about the paper and the one-of-a-kind registration method developed in the MIDA group in Sweden.


If you don't want to read the basics, skip to the next section.

Images of a single sample can be taken in different modalities, or in the same modality but at different times and under different conditions. Multiple views of a sample contribute additional information, but they need to be brought into the same spatial frame of reference. The process of aligning images is called image registration.

The transformation can have various degrees of freedom. The simplest is called rigid: only translation and rotation, which preserve distances within the image as well as parallel lines. Adding isotropic scaling (the same scaling in all dimensions) gives a similarity transformation, which still preserves angles and parallel lines but no longer distances. When more distortion is required, such as shear, the transformation is called affine; it preserves parallel lines but does not necessarily preserve distances. When the deformation varies in direction and magnitude across the image, the transformation is called deformable/elastic/non-rigid.
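As a quick illustration (not tied to the paper), the rigid and affine cases can be written as homogeneous matrices and checked against the properties above:

```python
import numpy as np

# Illustrative 2-D transforms as 3x3 homogeneous matrices.

def rigid(theta, tx, ty):
    """Rotation + translation: preserves distances and angles."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

def affine(a, b, c, d, tx, ty):
    """Six parameters: adds scaling and shear; parallel lines stay parallel."""
    return np.array([[a, b, tx],
                     [c, d, ty],
                     [0, 0,  1]])

# A unit square, in homogeneous coordinates (one point per column).
square = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]).T

# Rigid transforms preserve the side length of the square.
rotated = rigid(np.pi / 2, 2.0, 0.0) @ square
side_before = np.linalg.norm(square[:2, 1] - square[:2, 0])
side_after = np.linalg.norm(rotated[:2, 1] - rotated[:2, 0])
print(np.isclose(side_before, side_after))   # True

# A shear (affine) does not preserve distances between all point pairs.
sheared = affine(1.0, 0.5, 0.0, 1.0, 0.0, 0.0) @ square
diag_before = np.linalg.norm(square[:2, 2] - square[:2, 0])
diag_after = np.linalg.norm(sheared[:2, 2] - sheared[:2, 0])
print(np.isclose(diag_before, diag_after))   # False
```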

Image registration is expressed as an optimization problem: iteratively search for the parameters of a transformation that maps one image (the moving image) into the reference space of another (the fixed image). The best alignment is decided by a distance measure between the fixed image and the transformed moving image. Registration can then be defined as finding T* = argmin_T d(I_fixed, T(I_moving)), where d is the chosen distance measure.
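A minimal sketch of that optimization loop, using a toy translation-only transform and mean squared error as the distance measure (my simplifying choices here, not the paper's):

```python
import numpy as np
from scipy import ndimage, optimize

# Toy intensity-based registration: recover a pure translation by
# minimizing mean squared error between fixed and warped moving image.
rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), sigma=3)
true_shift = np.array([2.0, -3.0])
moving = ndimage.shift(fixed, true_shift, order=1, mode="nearest")

def cost(params):
    # Transform the moving image with candidate parameters, compare to fixed.
    warped = ndimage.shift(moving, -np.asarray(params), order=1, mode="nearest")
    return np.mean((warped - fixed) ** 2)

res = optimize.minimize(cost, x0=[0.0, 0.0], method="Powell")
print(res.x)  # should land near true_shift
```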

Image registration can be feature-based or intensity-based. Feature-based means that several matching points are found in the two images, and then a transformation is estimated that minimizes the distance between those points. Intensity-based methods use the pixel intensities themselves to guide the registration. A few methods combine both features and intensities, such as Alpha AMD, which is used in our paper to find affine transformations between cores in a tissue microarray (TMA).
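For the feature-based case, once point matches are known, the affine transform minimizing the squared distances between them even has a closed-form least-squares solution. A small sketch with made-up points:

```python
import numpy as np

# Feature-based sketch: given matched point pairs, solve for the 2-D
# affine transform in closed form. Points and transform are synthetic.
rng = np.random.default_rng(1)
src = rng.random((10, 2)) * 100           # feature points in the moving image
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true             # matching points in the fixed image

# Solve dst ≈ [src, 1] @ M for the 3x2 parameter matrix M (6 parameters).
X = np.hstack([src, np.ones((len(src), 1))])
M, *_ = np.linalg.lstsq(X, dst, rcond=None)
A_est, t_est = M[:2].T, M[2]
print(np.allclose(A_est, A_true), np.allclose(t_est, t_true))
```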

Types of transformations

Different kinds of transformations. (Do not use this figure without my permission.)

In order to find the co-expression of two proteins coming from two consecutive slides, I had to register the cores. To do this I used Alpha AMD, which uses both intensity and spatial information to find the best possible affine transformation between the cores.

Why not deformable, you ask? Well, deformable registration has a considerably higher number of parameters and offers less control, and since the two slides are actually two different pieces of tissue, they should not necessarily match perfectly, or we would face the same problem as in 3D tissue reconstruction: the banana effect. Additionally, affine has the benefit of overlooking big folds or rips.

How does Alpha AMD work?

If you don’t care about the explanation and want to see the parameters for aligning tissue skip to the next section.

Alpha AMD quantizes the image and gradually aligns the cumulative sum of each level; this is one of the nifty tricks that combines spatial and intensity information in one go. It also works at multiple resolutions, in a pyramidal scheme.

Let’s see a toy example to understand how it works and what parameters to choose.

Imagine we have these two images to register. Notice that they are grayscale and have a gradient.

The intensities in these gradients can be quantized into as many levels as we want; let's see how 5 levels look in this gif showing the histogram of intensities.

Pixel intensities seen as a heightmap; each quantized level corresponds to one section of the histogram.

Then, by using different levels in a resolution pyramid, together with the cumulative sum of each quantization level, we are effectively using the intensity and spatial information at every scale at once.
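Putting the two multi-level ideas together, here is a simplified sketch of a resolution pyramid with per-level cumulative sets. The real Alpha AMD measure involves more than this (fuzzy sets and distance transforms), so treat it as intuition only:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Three resolutions: 64, 32, 16 (coarse levels guide the early iterations).
image = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
pyramid = [image]
for _ in range(2):
    pyramid.append(downsample(pyramid[-1]))

n_levels = 4
for img in pyramid:
    labels = np.minimum((img * n_levels).astype(int), n_levels - 1)
    # Cumulative sets: level k contains all pixels with label >= k, so
    # intensity and spatial support are captured together at each scale.
    cumulative = [(labels >= k) for k in range(n_levels)]
    print(img.shape, [c.sum() for c in cumulative])
```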

Parameters for aligning tissue

Since I had images coming from different slides, I used the unmixed hematoxylin (H) and DAB stains to convert each core to a grayscale version that does not carry staining intensity differences and simply indicates whether a pixel contains tissue or not.
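A sketch of this preprocessing idea using scikit-image's stain separation. The synthetic image, the patch color, and the mean threshold are illustrative assumptions, not the exact pipeline from the paper:

```python
import numpy as np
from skimage.color import rgb2hed

# Build a tiny synthetic RGB image: white background with a
# blue-purple, hematoxylin-like "tissue" patch.
rgb = np.ones((32, 32, 3))
rgb[8:24, 8:24] = [0.41, 0.38, 0.67]

# Unmix into hematoxylin / eosin / DAB optical-density channels.
hed = rgb2hed(rgb)
h_channel = hed[..., 0]

# Threshold the hematoxylin channel into a tissue-presence mask,
# discarding the intensity differences between slides.
mask = h_channel > h_channel.mean()
print(mask.sum(), mask.size)
```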

Then, taking those grayscale representations of the cores, I use Alpha AMD to find the affine matrix that I can use to align the DAB images, and from that find the co-expression. The video abstract in the paper explains this further.
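Once an affine matrix has been found, applying it to a DAB channel might look like this with SciPy (the matrix below is a made-up example, not one estimated by Alpha AMD):

```python
import numpy as np
from scipy import ndimage

# A synthetic DAB channel with one stained region.
dab_moving = np.zeros((100, 100))
dab_moving[40:60, 40:60] = 1.0

# Homogeneous 3x3 affine: slight rotation, scaling and translation.
theta = np.deg2rad(5)
T = np.array([[1.02 * np.cos(theta), -np.sin(theta),        3.0],
              [np.sin(theta),         1.02 * np.cos(theta), -2.0],
              [0.0,                   0.0,                   1.0]])

# scipy's affine_transform maps OUTPUT coordinates to INPUT coordinates,
# so pass the inverse to move the image by T.
T_inv = np.linalg.inv(T)
dab_aligned = ndimage.affine_transform(
    dab_moving, T_inv[:2, :2], offset=T_inv[:2, 2], order=1)
print(dab_aligned.sum())  # nonzero: the stained region survives the warp
```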

To get the results depicted below my parameters for Alpha AMD are:

symmetric_measure = False
squared_measure = False
param_iterations = 200
param_sampling_fraction = 0.4

My images are around 10,000 × 10,000 pixels.

Unmixing, then using the H channel to find the transformation T
Final overlapping DAB for each protein

Want to know more? Contact me or invite me for a coffee.