

Posters in Image Analysis for Microscopy

Throughout my PhD I had the wonderful opportunity of traveling to different countries to learn and to show my work. It was a very exciting time, and I put all my design skills to the test. In this post I want to share the posters I made and talk a little about them.

I make all my posters and figures in Inkscape, Blender and GIMP, all free and open-source design software with much the same capabilities as proprietary paid tools. I also program many of my SVG figures using JavaScript and D3.

Exploratory analysis and visualization of in-situ sequencing data

TissUUmaps is a tool to explore millions of points of in-situ sequencing data directly on top of the tissue. It works offline or online, comes with documentation, and lets you analyze the data from the browser console using JavaScript or simply through the GUI. Presented at EMBL – Seeing is Believing as a lightning talk, and at Janelia’s Women in Computational Biology conference in the U.S.

Learning from the interaction of gene expression, protein expression and tissue morphology to make better decisions about cancer treatment

In the group where I worked, we were very interested in combining different sources of information and using spatial information to analyze and visualize biological phenomena. Presented at a Deep Learning workshop at the Center for Systems Biology in Dresden.

Quality Assurance and Local Regions for Whole Slide Image Registration

When trying to align whole slide images (WSI), it is hard both to evaluate the result and to find the relevant locations. This poster shows how I approach this problem and gives some fun facts about WSI. I presented it at the European Congress of Digital Pathology.

Memories of the thesis defense day

The day is May 12th, 2021. Location: the magnificent Universitetshuset, the main university building at Uppsala University, a place where Nobel prize winners and nominees gather to have dinner and discuss.

Universitetshuset – Image by University Press

Due to the world situation with COVID, only 8 people were allowed in the auditorium, but that was enough to make me feel happy and supported.

The speaker’s stand is adorned with a beautiful golden emblem of the university, and this time it was my turn to speak. I felt important!

Here is a video of my presentation. If you want to read the thesis you can find it here.

Image Processing, Machine Learning and Visualization for Tissue Analysis – Pop science presentation at Sal IX, Universitetshuset, Uppsala University

The whole event lasted some 5 hours, but I can only share my part of the presentation publicly.

I received very good and detailed questions and a lively debate about my work, which is always encouraging, so I am very thankful to the opponent and the committee.

After a closed-door deliberation, committee member Prof. Anna Kreshuk announced that they were pleased with my answers and agreed that I should be granted the title of Teknologie doktor (Doctor of Philosophy in English).

My supervisor Carolina Wählby had many surprises for me! Along with my boyfriend, friends and colleagues, I received much love, many gifts and flowers, and we enjoyed a fun moment playing a quiz about me! It was really fun and had many tricky questions!

Summer started the very same day and in the end a few of us had a corona-safe picnic right in front of Universitetshuset. It was a perfect day! Quite the fairy tale finish!

One of the gifts included a driving lesson gift card, so I guess that now I really have to learn. So many adventures to come! I look forward to whatever comes my way!

Image Processing, Machine Learning and Visualization for Tissue Analysis

Digital Comprehensive Summaries of Uppsala Dissertations
from the Faculty of Science and Technology 2025

Main document

Image Processing, Machine Learning and Visualization for Tissue Analysis

See the version in Uppsala University’s DiVA portal

Accompanying papers

These are the papers where I am first author. They are published under an open access license, with the exception of Paper III.

Paper I

TissUUmaps: interactive visualization of large-scale spatial gene expression and tissue morphology data

At: Bioinformatics – Oxford University Press

Paper II

Towards Automatic Protein Co-Expression Quantification in Immunohistochemical TMA Slides

At: IEEE – Journal of Biomedical and Health Informatics

Paper III

Whole Slide Image Registration for the Study of Tumor Heterogeneity

At: MICCAI 2018 – Digital Pathology workshop

Paper IV

Machine learning for cell classification and neighborhood analysis in glioma tissue. (Under review)

At: bioRxiv preprint. Please note that this paper has since been revised and greatly improved; you can find the revised version now.

Additional papers

These are publications in which I took part in a supporting role.

Automated identification of the mouse brain’s spatial compartments from in situ sequencing data

At: BMC Biology

Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study

At: The Lancet – Oncology

Deep learning in image cytometry: a review

At: Cytometry part A

Transcriptome-Supervised Classification of Tissue Morphology Using Deep Learning

At: IEEE International Symposium on Biomedical Imaging 2020

Decoding Gene Expression in 2D and 3D

At: Scandinavian Conference on Image Analysis 2017

You can see a full list of publications in my Google Scholar profile

Registration of WSI and TMA

Our paper was accepted at the IEEE Journal of Biomedical and Health Informatics. I personally learned a lot, and while the reviews were tough, they were much appreciated.

With people focusing so much on learning-based methods and forgetting the classical methods that are really the base of our knowledge, I’d like to talk a little about the paper and the one-of-a-kind registration method developed in the MIDA group in Sweden.

Basics

If you don’t want to read the basics, skip to the next section.

Images of a single sample can be taken in different modalities, or in the same modality but at different times and under different conditions. Multiple views of a sample can contribute additional information, but first they need to be brought into the same spatial frame of reference. The process of aligning images is called image registration.

The transformation can have various degrees of freedom. The simplest is called rigid: it only involves translation, rotation and isotropic scaling (the same scaling in all dimensions). Such transformations preserve distances within the image and preserve parallel lines. When more distortion is required, such as shear, the transformation is called affine; it preserves parallel lines but does not necessarily preserve distances. When the deformation varies in direction and magnitude across the image, the transformation is called deformable/elastic/non-rigid.
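To make the distinction concrete, here is a small NumPy toy example of my own (not from the paper): a unit square under a rigid (similarity) transform keeps all its side lengths equal, while an affine transform with shear keeps parallel lines parallel but changes some distances.

```python
import numpy as np

# A unit square as a set of 2D points.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

def transform(pts, A, t):
    """Apply x -> A @ x + t to each 2D point."""
    return pts @ A.T + t

# Rigid + isotropic scaling: rotation by 45 degrees, scale 2, then translate.
theta, s = np.pi / 4, 2.0
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
p_rigid = transform(square, R, np.array([3.0, 1.0]))

# Affine with shear: parallel lines survive, distances do not.
A_shear = np.array([[1.0, 0.5],
                    [0.0, 1.0]])
p_aff = transform(square, A_shear, np.zeros(2))

def side_lengths(q):
    return [float(np.linalg.norm(q[i] - q[(i + 1) % 4])) for i in range(4)]

print(side_lengths(p_rigid))  # all sides scaled by the same factor: 2.0
print(side_lengths(p_aff))    # shear changes some of the side lengths
```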

Image registration is expressed as an optimization problem that is solved by iteratively searching for the parameters of a transformation that maps one image (the moving image) into the reference space of another (the fixed image). The best alignment is decided based on a distance measure between the fixed image and the transformed moving image.
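In symbols, a common way to write this (using $I_F$ for the fixed image, $I_M$ for the moving image, $T_\theta$ for the transformation with parameters $\theta$, and $d$ for the distance measure) is:

```latex
\hat{\theta} = \arg\min_{\theta}\; d\big(I_F,\, T_{\theta}(I_M)\big)
```

The optimizer iterates over $\theta$ until the distance stops improving.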

Image registration can be feature-based or intensity-based. Feature-based means that several matching points have to be found in the images, and then a transformation is sought that minimizes the distance between these points. Intensity-based methods use the pixel intensities to guide the registration. A few methods combine both features and intensities, such as Alpha AMD, which is used in our paper to find affine transformations between cores in a TMA.
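As a minimal sketch of the feature-based idea (this is not the method in the paper; Alpha AMD works on intensities and spatial information), an affine transformation can be estimated from matched point pairs with a linear least-squares fit. The landmark pairs below are fabricated for illustration:

```python
import numpy as np

# Hypothetical matched landmarks: src in the moving image, dst in the fixed image.
src = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 5]], dtype=float)

# Ground-truth affine, used here only to fabricate dst for the toy example.
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
t_true = np.array([3.0, -2.0])
dst = src @ A_true.T + t_true

# Solve dst ~= src @ A.T + t in the least-squares sense:
# stack [x, y, 1] so the translation is estimated together with A.
X = np.hstack([src, np.ones((len(src), 1))])
params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # shape (3, 2)
A_est, t_est = params[:2].T, params[2]

print(np.round(A_est, 3))  # recovers A_true
print(np.round(t_est, 3))  # recovers t_true
```

With noisy correspondences the same fit gives the best affine in the least-squares sense rather than an exact recovery.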

Types of transformations

Different kinds of transformations.
Do not use figure without my permission

In order to find the co-expression between two proteins coming from two consecutive slides, I had to register the cores. To do this I used Alpha AMD, which uses both intensity and spatial information to find the best possible affine transformation between the cores.

Why not deformable, you ask? Well, deformable registration has a considerably higher number of parameters and offers less control, and since the two slides are actually two different pieces of tissue, they should not necessarily match perfectly, or we would face the same problem as in 3D tissue reconstruction: the banana effect. Additionally, affine has the benefit of overlooking big folds or rips.

How does Alpha AMD work?

If you don’t care about the explanation and just want to see the parameters for aligning tissue, skip to the next section.

Alpha AMD quantizes the image and gradually aligns the cumulative sum of each level; this is one of the nifty tricks to combine spatial and intensity information in one go. It also works at multiple resolutions, in a pyramidal scheme.

Let’s see a toy example to understand how it works and what parameters to choose.

Imagine we have these two images to register. Notice that they are grayscale and have a gradient.

The intensities in these gradients can be quantized into as many levels as we want; let’s see how 5 of them look in this gif showing the histogram of intensities.

Pixel intensities seen as a heightmap. Quantized level, histogram section in level.

Then, using different levels in a resolution pyramid together with the cumulative sum of each quantization level, we are basically using all of the spatial and intensity information at once.
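Here is my own toy reading of the quantization trick in NumPy (a sketch of the idea, not the actual Alpha AMD implementation): quantize a gradient image into a few levels, then build the cumulative "alpha-cut" sets, one binary mask per level. Each mask records where the pixels are, and a pixel appears in as many masks as its intensity level, so the stack carries both spatial and intensity information.

```python
import numpy as np

# Toy grayscale image: a horizontal gradient, like the example above.
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))

Q = 5  # number of quantization (alpha) levels
# Quantize intensities into levels 0 .. Q-1.
levels = np.minimum((img * Q).astype(int), Q - 1)

# Cumulative masks: for each level a, the set of pixels with quantized
# intensity >= a. A brighter pixel is a member of more sets.
cuts = np.stack([(levels >= a) for a in range(1, Q)])  # shape (Q-1, 8, 8)

# Summing the stack recovers the quantized image, so no information is lost:
print(np.array_equal(cuts.sum(axis=0), levels))  # True
```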

Parameters for aligning tissue

Since I had images coming from different slides, I used the unmixed H and DAB stains to convert each core to a grayscale version that does not have differences in intensity and just shows whether a pixel contains tissue or not.
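For readers unfamiliar with stain unmixing, here is a from-scratch sketch of color deconvolution (Ruifrok & Johnston style) in NumPy. The stain vectors are the commonly quoted reference values, not calibrated to any scanner, and the tiny "tissue" patch and the 0.1 threshold are made up for illustration; the paper's actual unmixing pipeline may differ.

```python
import numpy as np

# Reference stain vectors (optical density directions) for H and DAB.
H   = np.array([0.650, 0.704, 0.286])
DAB = np.array([0.268, 0.570, 0.776])
RES = np.cross(H, DAB)                      # complementary third channel
M = np.stack([H, DAB, RES])
M = M / np.linalg.norm(M, axis=1, keepdims=True)

def separate_stains(rgb):
    """rgb in [0,1], shape (..., 3) -> stain concentrations (..., 3)."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))  # optical density (Beer-Lambert)
    return od @ np.linalg.inv(M)

# A fake 2x2 patch: pure-H pixel, pure-DAB pixel, white background, and a mix.
rgb = np.array([[10 ** (-0.5 * H),  10 ** (-0.5 * DAB)],
                [np.ones(3),        10 ** (-0.3 * H - 0.3 * DAB)]])
conc = separate_stains(rgb)

# A simple "tissue or not" mask, stain-intensity free: any stain present.
tissue = conc[..., :2].sum(axis=-1) > 0.1
print(tissue)  # background pixel is False, the rest True
```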

Then, taking those grayscale representations of the core, I used Alpha AMD to find the affine matrix that I can use to align the DAB images and thereby find the co-expression. The video abstract in the paper explains further.

To get the results depicted below my parameters for Alpha AMD are:

alpha_levels = 7
plevels = [128, 64, 32, 16, 8, 4]
sigmas = [60, 30, 15.0, 8.0, 4.0, 2.0]
symmetric_measure = False
squared_measure = False
param_iterations = 200
param_sampling_fraction = 0.4

My images are around 10,000 x 10,000 pixels.
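My reading of these settings (an assumption on my part, so double-check against the actual implementation) is that each entry of plevels is a coarse-to-fine pyramid downsampling factor, paired with a smoothing sigma. The sketch below uses a plain mean-pool downsample on a stand-in image just to show how small the coarse levels get:

```python
import numpy as np

def downsample(img, factor):
    """Mean-pool downsampling by an integer factor (crops to a multiple)."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

plevels = [128, 64, 32, 16, 8, 4]   # read here as coarse-to-fine factors
rng = np.random.default_rng(0)
img = rng.random((1000, 1000))      # stand-in for a ~10k x 10k core

pyramid = [downsample(img, f) for f in plevels]
print([p.shape for p in pyramid])
```

The coarse levels are tiny and cheap to register; each result then initializes the search at the next, finer level.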

Unmixing then using H to find the transformation T
Final overlapping DAB for each protein

Want to know more? Contact me or invite me for coffee.
