My beloved Johan Öfverstedt is defending his thesis on February 25th, 2022, at 13:00 at Uppsala University, Sweden (see other world times here). Here’s some handy info for those interested in the event; I also wanted to summarize it in this post.
New Ångström. Heinz-Otto Kreiss lecture hall, first floor
Throughout my PhD I had the wonderful opportunity of traveling to different countries to learn and to show my work. It was a very exciting time and I put all my design skills to the test. In this post I want to share with you the posters I made and talk a little bit about them.
Exploratory analysis and visualization of in-situ sequencing data
Learning from the interaction of gene expression, protein expression and tissue morphology to make better decisions about cancer treatment
In the group where I worked, we were very interested in combining different sources of information and using spatial information to analyze and visualize biological phenomena. Presented at a Deep Learning workshop at the Center for Systems Biology in Dresden.
Quality Assurance and Local Regions for Whole Slide Image Registration
When trying to align WSIs it is hard both to evaluate the result and to identify relevant regions. This poster shows how I approach this problem and gives some fun facts about WSIs. I presented it at the European Congress on Digital Pathology.
The day is May 12th 2021. Location: the magnificent Universitetshuset, or main university building, at Uppsala University, a place where Nobel prize winners and nominees gather to have dinner and discuss.
Due to the world situation with COVID, only 8 people are allowed in the auditorium, but it’s enough to make me feel happy and supported.
The speaker stand is adorned with a beautiful golden emblem of the university, and this time it was my turn to speak. I felt important!
Here is a video of my presentation. If you want to read the thesis you can find it here.
The whole event lasted some 5 hours, but I can only share my part of the presentation publicly.
I received very good, detailed questions and a lively discussion of my work, which is always encouraging, so I am very thankful to the opponent and the committee.
After a closed-door deliberation, committee member Prof. Anna Kreshuk announced that they were pleased with my answers and agreed that I should be granted the title of Teknologie doktor (Doctor of Philosophy in English).
My supervisor Carolina Wählby had many surprises for me! Along with my boyfriend, friends and colleagues, I received much love, many gifts and flowers, and we enjoyed a fun moment playing a quiz about me! It was really funny and had many tricky questions!
Summer started the very same day and in the end a few of us had a corona-safe picnic right in front of Universitetshuset. It was a perfect day! Quite the fairy tale finish!
One of the gifts included a driving lesson gift card, so I guess that now I really have to learn. So many adventures to come! I look forward to whatever comes my way!
Our paper was accepted at the IEEE Journal of Biomedical and Health Informatics. I personally learned a lot, and while the reviews were tough, they were much appreciated.
With people focusing so much on learning-based methods and forgetting the classical methods that are really the foundation of the field, I’d like to talk a little bit about the paper and the one-of-a-kind registration method developed in the MIDA group in Sweden.
Images of a single sample can be taken in different modalities, or in the same modality but at different times and under different conditions. Multiple views of a sample can contribute additional information, and they need to be brought into the same spatial frame of reference. The process of aligning images is called image registration.
The transformation can have various degrees of freedom. The simplest is called rigid: it only allows translation and rotation, so it preserves distances within the image as well as parallel lines. Adding isotropic scaling (the same scaling in all dimensions) gives a similarity transform, which preserves angles and the ratios of distances. When more distortion is required, such as shear, the transformation is called affine; it preserves parallel lines but does not necessarily preserve distances. When the deformation varies in direction and magnitude across the image, the transformation is called deformable/elastic/non-rigid.
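To make the distinction concrete, here is a small sketch of how the two families act on distances. All the numbers (angle, scale, shear, translation) are made up for illustration: a similarity transform scales every distance by the same factor, while adding shear does not.

```python
import numpy as np

theta = np.deg2rad(30)
s = 2.0  # isotropic scale factor

# Similarity transform (homogeneous 2D): rotation + isotropic scale + translation
similarity = np.array([
    [s * np.cos(theta), -s * np.sin(theta), 5.0],
    [s * np.sin(theta),  s * np.cos(theta), 3.0],
    [0.0,                0.0,               1.0],
])

# Adding shear turns it into a general affine transform
shear = np.array([
    [1.0, 0.7, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
affine = similarity @ shear

p = np.array([0.0, 0.0, 1.0])  # two points, one unit apart
q = np.array([0.0, 1.0, 1.0])

def dist(a, b):
    return float(np.linalg.norm(a[:2] - b[:2]))

d0 = dist(p, q)                            # distance before transforming
d_sim = dist(similarity @ p, similarity @ q)  # scaled uniformly by s
d_affine = dist(affine @ p, affine @ q)       # shear changes the distance
print(d0, d_sim, d_affine)
```

Parallel lines stay parallel under both transforms; it is only the distances (and, with shear, the angles) that behave differently.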
Image registration is expressed as an optimization problem that is solved by iteratively searching for the parameters of a transformation that maps one image (the moving image) into the reference space of another image (the fixed image). The best alignment is decided based on a distance measure between the reference image and the transformed moving image. Registration can then be defined as finding T* = argmin_T d(I_F, T(I_M)), where I_F is the fixed image, I_M is the moving image, T is the transformation and d is the distance measure.
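As a toy illustration of this optimization view (not the method from the paper), here is a brute-force search for the translation that minimizes the sum of squared differences between a fixed image and a shifted copy of it:

```python
import numpy as np

rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
true_shift = (3, -2)
moving = np.roll(fixed, true_shift, axis=(0, 1))  # moving = shifted fixed

# Exhaustively search the translation parameters, scoring each candidate
# with the sum of squared differences (SSD) distance measure.
best = None
for dy in range(-5, 6):
    for dx in range(-5, 6):
        candidate = np.roll(moving, (-dy, -dx), axis=(0, 1))
        ssd = float(np.sum((fixed - candidate) ** 2))
        if best is None or ssd < best[0]:
            best = (ssd, dy, dx)

print(best)
```

Real registration methods replace the exhaustive search with gradient-based optimization and richer transformation models, but the structure (transform, measure distance, update parameters) is the same.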
Image registration can be feature based or intensity based. Feature based means that several matching points have to be found in the images, and then a transformation is sought that minimizes the distance between these points. Intensity-based methods use the pixel intensities to guide the registration. There are a few methods that combine both features and intensities, such as Alpha AMD, which is used in our paper to find affine transformations between cores in a TMA (tissue microarray).
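For the feature-based flavour, the core step can be sketched as a least-squares fit: given matched point pairs, estimate the affine transform that best maps one point set onto the other. The data and names below are synthetic, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
moving_pts = rng.random((10, 2))   # "detected features" in the moving image

# Ground-truth affine transform used to generate the matched points
true_A = np.array([[1.1, 0.2], [-0.1, 0.9]])
true_t = np.array([2.0, -1.0])
fixed_pts = moving_pts @ true_A.T + true_t

# Least-squares fit in homogeneous coordinates: solve X_h @ M ≈ fixed_pts
X_h = np.hstack([moving_pts, np.ones((10, 1))])
M, *_ = np.linalg.lstsq(X_h, fixed_pts, rcond=None)
A_est, t_est = M[:2].T, M[2]

print(A_est, t_est)
```

With noisy, partially mismatched features a robust estimator (e.g. RANSAC around this fit) would be used instead of a plain least squares.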
Types of transformations
In order to find the co-expression between two proteins coming from two different consecutive slides, I had to register the cores. To do this I used Alpha AMD, which is able to use both intensity and spatial information to find the best possible affine transformation between the cores.
Why not deformable, you ask? Well, deformable transformations have a considerably higher number of parameters and offer less control, and since the two slides are actually two different pieces of tissue, they should not necessarily match perfectly, or we would face the same problem as in 3D tissue reconstruction: the banana effect. Additionally, affine has the benefit of overlooking big folds or rips.
If you don’t care about the explanation and just want the parameters for aligning tissue, skip to the next section.
Alpha AMD quantizes the image and gradually aligns the cumulative sum of each level; this is one of the nifty tricks to combine spatial and intensity information in one go. It also does this at several scales, in a pyramidal scheme.
Let’s see a toy example to understand how it works and what parameters to choose.
Imagine we have these two images to register. Notice that they are grayscale and have a gradient.
The intensities in these gradients can be quantized into as many levels as we want; let’s see how 5 of them look in this gif showing the histogram of intensities.
Then, using different levels in a resolution pyramid and the cumulative sum of each quantization level, we are basically using all the following information:
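My reading of the quantization-plus-cumulative-sum idea, in a simplified sketch (the level count and toy gradient image are illustrative; this is not the actual Alpha AMD implementation):

```python
import numpy as np

levels = 5
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # 8x8 horizontal gradient

# Quantize intensities into integer levels 0..levels-1
q = np.minimum((img * levels).astype(int), levels - 1)

# Cumulative level sets: for each level k, the pixels with value >= k.
# Aligning these spatial sets (rather than raw intensities) is what
# couples intensity and spatial information in one representation.
cum_sets = [q >= k for k in range(levels)]
areas = [int(s.sum()) for s in cum_sets]
print(areas)
```

The level sets nest (each one contains the next), which is why their sizes decrease monotonically; in the full method, distance transforms of these sets at each pyramid level drive the alignment.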
Parameters for aligning tissue
Since I had images coming from different slides, I used the unmixed hematoxylin (H) and DAB stains to convert each core to a grayscale version that does not have differences in intensities and simply shows whether a pixel contains tissue or not.
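A sketch of this preprocessing step using classic color deconvolution (Ruifrok & Johnston-style stain vectors; the stain matrix, threshold and toy image here are illustrative values, not the ones from the paper):

```python
import numpy as np

# Stain OD vectors (rows): hematoxylin, DAB, and eosin as a residual channel
stains = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin (H)
    [0.268, 0.570, 0.776],   # DAB
    [0.072, 0.990, 0.105],   # eosin, used here as the residual
])
unmix = np.linalg.inv(stains)

rgb = np.ones((16, 16, 3))            # white background = no tissue
rgb[4:12, 4:12] = [0.3, 0.2, 0.4]     # fake stained region

od = -np.log10(np.clip(rgb, 1e-6, 1.0))  # optical density (Beer-Lambert law)
conc = od @ unmix                         # per-pixel stain concentrations

tissue = conc[..., 0] + conc[..., 1]      # combined H + DAB signal
mask = tissue > 0.1                       # "does this pixel contain tissue?"
print(mask.sum())
```

The resulting binary-like map depends only on where stain was deposited, not on which stain or how strongly, which is what makes it comparable across slides.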
Then, taking those grayscale representations of the cores, I use Alpha AMD to find the affine matrix that I can use to align the DAB images and thereby find the co-expression. The video abstract in the paper explains this further.
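Once the affine matrix is found, applying it to the DAB image amounts to resampling under the inverse mapping. A minimal pure-numpy sketch with nearest-neighbour sampling (a real pipeline would use a library resampler with proper interpolation):

```python
import numpy as np

def warp_affine(img, A, t):
    """Inverse warping: output[y, x] = img[A @ (y, x) + t], nearest neighbour."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()]).astype(float)  # (2, h*w)
    src = np.rint(A @ coords + t[:, None]).astype(int)          # source coords
    valid = (src[0] >= 0) & (src[0] < h) & (src[1] >= 0) & (src[1] < w)
    vals = np.where(valid,
                    img[np.clip(src[0], 0, h - 1), np.clip(src[1], 0, w - 1)],
                    0.0)                                        # fill outside with 0
    return vals.reshape(h, w)

moving = np.zeros((10, 10))
moving[5, 6] = 1.0               # a single bright pixel to track
A = np.eye(2)                    # identity linear part
t = np.array([2.0, 3.0])         # pure translation for the demo
warped = warp_affine(moving, A, t)
print(np.argwhere(warped == 1.0))
```

Sampling under the inverse mapping (pulling values from the source) avoids the holes that forward mapping would leave in the output grid.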
To get the results depicted below, my parameters for Alpha AMD are: