This is the first lecture of the course Medical Image Analysis at UCPH.
Reference: all the material is from the course Medical Image Analysis taught at UCPH.
Today we look at X-ray and CT in particular. What I want you to take away from this class is the following: know the difference between gamma rays and X-rays; be able to explain how an X-ray image is formed; be able to write down, roughly, the formula for the Radon transform; and be able to draw a diagram describing how filtered back projection works, i.e., what the basics of that algorithm (and of back projection in general) are. Those are the things we look at today, and we'll go through the basic physics first.
So, what kind of medical imaging modalities do we actually have access to? One way of sorting medical imaging modalities is by the type of energy that is used to make the image.
First, we have external energies. Examples are X-ray or CT, where X-rays are shot at a person, so something external goes into or through you. Then we have internal energies, with SPECT and PET. There we inject something radioactive into you; it travels to where we want it to go and then decays inside you. The radiation from that decay shines out of you and we measure it. Since that goes from the inside out, we call it internal. Finally, there is the combination of external and internal: MRI. We will go into the principles of MRI a bit more next week. In MRI, radio waves from the outside are used to stimulate something on the inside, and then we look at the radio waves that the hydrogen atoms in the body emit back. So it's an interplay between external and internal.
So, in order to go deeper into this, we first need to know what it even means to use X-rays. We'll start with the very basics.
What is light made of?
So we have the particle-wave duality: we can describe light either as photons (the particle view) or as electromagnetic radiation (the wave view).
What categories of light do you know of?
We can sort it either by wavelength or by frequency. At one end we have radio waves, which are quite long, around $10^3$ m; this means the length of one oscillation is very, very long. Moving along, we have TV waves and then microwaves at about $10^{-2}$ m.
Then we go all the way down to X-rays and gamma rays on the order of $10^{-10}$ m. $10^{-10}$ m is an angstrom, roughly the size of an atom.
And this is how we can categorize different kinds of light.
Radiation
We need to get to know three definitions. We now know what light is: photons or waves, depending on how we're dealing with it. Now we need three words.
The first word is radiation. Radiation means any emission of energy as electromagnetic waves (photons) or as moving subatomic particles (alpha particles, electrons, or positrons). So if you hit an atomic nucleus very hard and part of the nucleus flies off, that could also be considered radiation.
A shorter version: emission of energy from a source through space or a material medium.
Described both as waves and as particle-like units called photons.
Ionizing radiation
Radiation with sufficient energy to cause ionization in the medium through which it passes.
Ionization
Process in which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons to form ions.
In other words, we modify an atom by removing or adding charge, and in this way we change the neutrality of the atom so that it becomes charged. So ionization means we're basically making a charged particle.
| Radiation | Energy | Mass | Charge | Examples |
|---|---|---|---|---|
| Electromagnetic | √ | ⨯ | ⨯ | X-rays, gamma radiation |
| Particulate | √ | √ | √ / ⨯ | beta radiation, alpha radiation, neutrons |
Electromagnetic (EM) wave
Consists of oscillating electric (E) and magnetic (B) fields
E and B are perpendicular to each other and to the direction of propagation
EM radiation is characterized by wavelength ($\lambda$), frequency (f), and energy per photon (E)
The energy of a photon
\[E = hf = \frac{hc}{\lambda}\]where $h$ is Planck's constant and $c$ is the speed of light, hence $hc \approx 1.2398 \times 10^{-6}\ \mathrm{eV \cdot m}$
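As a quick numerical sanity check (not from the original notes), here is a minimal Python sketch of $E = hc/\lambda$; the constant is the $hc$ value above in eV·m, and the example wavelengths are hypothetical picks from the spectrum discussed earlier:

```python
# Minimal sketch (assumption: hc expressed in eV*m, as in the formula above)
HC_EV_M = 1.2398e-6  # Planck's constant times the speed of light, in eV*m

def photon_energy_eV(wavelength_m: float) -> float:
    """Photon energy E = hc / lambda, in electron volts, for a wavelength in metres."""
    return HC_EV_M / wavelength_m

print(photon_energy_eV(1e-10))  # X-ray / gamma regime: ~1.24e4 eV (about 12.4 keV)
print(photon_energy_eV(1e-2))   # microwave regime: ~1.24e-4 eV
```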
How is radiation produced?
Produced in one of two ways
Interaction of a particle with matter
Radioactive decay of an unstable atom
Types of radioactive decay
$\implies$ So, what happens when radiation is produced by unstable atoms? $\implies$ We get radioactive decays.
One of the things you can look at is the different types of radioactive decay; they come in different sizes and with different energies.
Alpha
The first one is the alpha particle, which is essentially a helium nucleus, at $4$-$8$ MeV (megaelectron volts, where one MeV equals one million electron volts). It is very big and therefore has very low penetration: a piece of paper can already stop it. So this is not something you really need to worry about when walking around the hospital.
Beta
Then we have beta decay. In beta decay, an electron or a positron gets emitted, with energies of a few MeV. Its penetration is somewhat higher than an alpha particle's, but still fairly low. The emitted positron can, for example, annihilate with an electron, and then you get rays coming out; we'll go into how that happens next week. This is how we then form a PET image: we use this for PET.
Gamma
Then we have gamma rays, which are a by-product of radioactive decay. They are photons, much smaller than alpha or beta particles (typically around $100$-$200$ keV in imaging), and therefore they have much higher penetration. We use these, for example, for SPECT.
Transfer of energy from emitted radiation to matter
Depending on its energy and ability to penetrate matter, radiation can be either:
Ionizing radiation
Excitation (non-ionizing) radiation
What’s the connection between X-rays and gamma rays? Actually, it’s only a question of definition.
Basically, they are the same type of thing: electromagnetic waves, or light, at very high frequencies. You can distinguish them on the basis of their wavelength; typically we put X-rays first and gamma rays higher up in energy. This is only possible if we actually know the wavelength or the energy. The other way to distinguish them is by how they are formed: X-rays are usually emitted by electrons, while gamma rays are emitted by the atomic nucleus, which is more of a decay process like the one we use in SPECT.
1. distinguish on the basis of wavelength, with radiation between $10^{-9}$ and $10^{-11}$ m defined as X-rays and $10^{-11}$ to $10^{-13}$ m defined as gamma rays $\implies$ only possible if the wavelength is known!
2. distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus.
Question: Compare the photon energy of X-rays and gamma rays
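A sketch of an answer, assuming the wavelength-based definition above and $hc \approx 1.24 \times 10^{-6}$ eV·m:

\[E_{\text{X-ray}} = \frac{hc}{\lambda} \approx \frac{1.24\times10^{-6}\,\text{eV·m}}{10^{-9}\ldots10^{-11}\,\text{m}} \approx 1.2\,\text{keV} \ldots 124\,\text{keV}, \qquad E_{\gamma} \approx \frac{1.24\times10^{-6}\,\text{eV·m}}{10^{-11}\ldots10^{-13}\,\text{m}} \approx 124\,\text{keV} \ldots 12.4\,\text{MeV}\]

So, by this definition, a gamma-ray photon carries more energy per photon than an X-ray photon (the ranges meet at about $124$ keV).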
X-ray tube diagram
How does it work in the clinic?
Basically, what we have is something like a little light bulb. The way it works is that we have a cathode and an anode.
At the cathode there is a filament that we heat up, so it emits electrons; these electrons are accelerated to high energy and shot at the anode, which consists of heavier atoms, and the X-rays come out from there. That's basically how they are produced.
And of course, we need enough energy to heat the filament and accelerate the electrons. So that's one of the things you would need to check when something goes wrong: is there even power to the system, and so on.
How are the X-rays produced?
You go into the hospital, you're looking at the X-ray machine, and you're the one with the technical background coming from the university. The doctor says, "It doesn't work, I don't know what's wrong." You need to have at least a basic understanding of what could be wrong.
So, first of all, the machine itself needs to produce the X-rays. For example, if there is a power outage in the whole building, that would be one reason: you couldn't even turn on the little light bulb that produces the X-rays. $\implies$ The way X-rays are made is basically through the interaction of very high-speed electrons with a heavy atom. The incident electron comes in and hits an electron in the atom, which is ejected, and the incident electron takes its place. But because the incident electron came in with more energy, a photon is also released. So if I have a moving electron and I hit something of essentially the same weight (another electron), I bounce that electron out like a billiard ball, but because I came in with momentum, that momentum is transferred into the energy that comes out. And that is basically called Bremsstrahlung!
Bullet Points for Above
X-rays are generated as the result of interactions of high-speed electrons with heavy target atoms
Bremsstrahlung: a continuous X-ray spectrum, produced when an accelerated electron loses energy in an interaction with an atom and the lost energy is emitted as X-ray photons in a scattered direction.
Characteristic radiation: discrete X-ray energies, emitted when an ejected inner-shell electron is replaced by an outer-shell electron.
Radiography images
2D images
Diagnosis of structural damage, abnormalities, …
Interaction of the X-ray beam with tissue
X-ray image reconstruction
And how does X-ray imaging work? Well, basically, it's just taking a picture.
When I take a regular photograph, what happens is the following: I have my CCD chip, and the visible light from the sun or another source hits you, reflects off of you, and lands on my CCD chip. That's how I get a picture of you with visible light: I see the light that reflects off of you onto my chip.
X-ray is very similar. This is my CCD chip, and up here, instead of the light bouncing off of you, I shoot the X-rays through you and then take an image of that. If you look at how this works, it's also very similar to how photographs were made in the old days. A lot of X-ray systems are digital now, but the old principle was the same: you have a layer of silver atoms, you shoot the X-rays at it, it changes state, and then you develop it chemically and see the image. So it's very similar to how a real photograph used to be developed on film.
And then we get this 2D projection image, for example of the abdomen here. When I take a regular photograph of you, I see your outside: the light bounces onto you and back into my chip, so I get a picture of your outside.
Here I instead get a picture by going through you. What's the difference between the things that are light and dark in the image? $\implies$ Density
Exactly, because this is bone. I've shot some particles through you, and the things that are more dense appear brighter. What this means is that there is a difference in how many particles get stuck on the way through and how many particles go all the way through. So you can think of this a little bit like a negative image in the old days.
Resolution
Contrast: the intensity difference between adjacent regions of the image
The attenuation coefficient describes exactly how many particles get stuck in you.
If the differences are large, like between bone and air, I get strong contrast. If I have two soft tissues, say muscle and fat, it's harder; you can almost not see the difference. So it's always good to have something very dense and something not so dense if I want to see structure.
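The notes do not write this out, but the standard relation behind the attenuation coefficient is the Beer–Lambert law: for a beam with incident intensity $I_0$ passing along a path $L$ through tissue with attenuation coefficient $\mu$, the detected intensity is

\[I = I_0 \exp\left(-\int_{L} \mu \, dl \right)\]

so tissues with a large $\mu$ (bone) remove many more photons from the beam than soft tissue or air, which is what creates the contrast. This line integral of $\mu$ is also exactly the quantity that the Radon transform, introduced later, formalizes.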
This is the reason why, when you break a bone, the imaging modality you go to is X-ray. If you have a concussion, not necessarily; unless they want to check whether bones in the skull are broken, they wouldn't take an X-ray of your brain.
And you can actually change the attenuation coefficient a little bit by injecting, or orally administering, a contrast solution.
Signal-to-noise ratio (SNR)
SNR depends on how long we expose. If I keep imaging you for half an hour, I get a much better picture than if I image you for just 2 seconds. But maybe you've moved in the meantime, which is bad, and I probably also cannot get away with the radiation exposure associated with 30 minutes of imaging.
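The notes do not quantify this, but for a photon-counting detector the noise is approximately Poisson, so with $N$ detected photons

\[\text{SNR} \approx \frac{N}{\sqrt{N}} = \sqrt{N},\]

i.e., doubling the exposure time (and therefore the dose) only improves the SNR by a factor of about $\sqrt{2}$.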
And SNR also depends on the type of source and detector. As with all scanners, there is a difference between baseline clinical scanners and research scanners. Plain X-ray itself is not often used in pure research settings, whereas CT, and especially MR and PET, are used much more in research. There you generally have better equipment, because you image fewer people and you are looking for tiny changes.
3D detailed information of the structures inside the body
Diagnose bone disorders, damage to internal organs, cancer, …
X-ray Computed Tomography (CT) is an imaging modality that produces cross-sectional images of the body.
X-rays produced by an X-ray tube, attenuated by the patient, and measured by an X-ray detector
So, X-ray was easy, right? We know how X-ray works, so what about CT? It is a little more advanced, but actually not really, because the basis of CT is that you have many X-rays. You take X-ray images from a lot of different angles, and this is basically how the first generation of scanners worked: there was a detector and a source that moved around the patient by rotating and translating.
Nowadays we have a rotating source with a full ring of detectors, and usually it even follows a spiral pattern. What does this look like?
These are some old examples, and this is how a CT scanner works: you have the ring, you have the bed, and then you put in the patient you want to image.
Here we have a CT scan of the abdomen, and the same thing applies as in X-ray: look at the dense things, these are the bones; this is the spine, the back of the ribs, and these are the arms and shoulder blades. What is denser is going to be brighter.
So, when we take a single CT acquisition, taking images from different angles, what we get is one single slice. Yes, we get a somewhat better image than a plain X-ray, because the X-ray is just a projection, but it is still only a single slice.
What we often want is a whole 3D volume, so that we can look not only at one single slice but at a whole volume. One way to get there is multi-slice CT: this would be like having the X-ray machinery 10 times next to each other, which gives me 10 slices, but also 10 times the radiation exposure of one slice.
So, to get around this, we can do what is called spiral CT. Instead of taking a full set of X-ray images around me at one level and repeating that in a stack of 10 slices, I can take an X-ray image here, then spiral a little: move my source a bit around but also a bit downwards, take another image, and so on. So instead of taking many X-ray images around me at each level and then moving down, I trace a spiral around the patient, and from that I can also reconstruct the 3D volume.
And the nice thing is that we get a lower radiation dose and a faster scan (within about a second) than doing the full-blown thing, but of course we get a little less image resolution.
Single-slice CT
A single detector row yields a single slice (2D); via movement along the z direction one arrives at a 3D volume.
Multi-slice CT
Can acquire multiple 2D slices at the same time, hence creating a 3D volume
Spiral CT (often already multi-slice):
Same restrictions as for X-ray imaging
Additional impact on resolution & SNR:
Most of the restrictions that applied to X-ray imaging are the same for CT, because it is essentially the same thing, just done many times. An additional factor here is how many projections we take.
Where in X-ray we take just one projection, we now take a lot of them; the more we take, the better the resolution, but also the longer it takes. Resolution also depends on how many detectors we have and on the resolution of those detectors. We always have to keep in mind that we can get better resolution, but we have to be mindful of both the radiation exposure and the scan time. If I image something lifeless, I could basically image it until it breaks, which would increase resolution but also scan time; for humans, radiation exposure obviously comes first, so there is an upper limit to how much you can actually expose people to.
And that is basically it for X-ray and CT. It is very basic and closely related to how a camera works. What I have skipped over, and where the details lie, is this: CT takes a lot of images around something, so how does it make one single image out of them? That is what we will work with next: the maths of how CT uses those multiple X-rays to reconstruct the CT image.
Advantages of spiral CT vs. circular CT
Imagine that I am the patient lying down. If I take a single-slice CT, I stay at one level and take one CT image. One CT image consists of several X-ray images from different angles, and this gives me one slice. So I still have a 2D image, similar to the abdomen image I showed you before. Now, if I want a volume instead of only a single slice, one option is to have 10 sources next to each other and take a full set of images all the way around with each of them. But that means I would have 10 times as much exposure as with a single slice. What I can do differently is this: instead of having 10 sources next to each other, I take one and rotate it all the way around while moving downwards as I take my angular projections, and interpolate a little. From that I can also reconstruct a 3D volume.
The resolution is slightly different: you get more resolution with 10 separate rings and the same number of projections per ring than by going around once while moving downwards as you go. That is the difference between them.
Why are we even looking at the Radon transform? The reason is that we are interested in image reconstruction.
CT is based on a bunch of X-rays, or X-ray images, or X-ray projections. How does a CT image come out of this? The most common approach is what we call image reconstruction, and the standard method is still filtered back projection. We also have iterative methods, for example OSEM, and the choice depends very much on the type of imaging: transmission or emission. Transmission means shooting something through the body, like CT; emission is something like PET (Positron Emission Tomography).
It also depends on the scanner's shape. For example, we have a PET scanner, the SIEMENS HRRT (High Resolution Research Tomograph), which is not circular but octagonal. This means you are missing some angles, and then you cannot do a regular filtered back projection, because of the missing angles. So those are some of the limitations that determine which reconstruction to use, but the most common is still filtered back projection.
So, let's look a little more mathematically at the Radon transform, which is one of the foundations of the filtered back projection algorithm.
Let's say I have an object in an x-y coordinate system. In my coordinate system I define a line $p$, and this line is basically defined by an angle; that angle you can see at the origin.
So I define a line $p$; this line $p$ is where I want to do the projection. You can think of it as the plate on which I want to record the X-ray image. I have a source, I shoot something through the object, and what I get on the line is a view of the density. (In a real X-ray we get the inverse, a negative: what is more dense gets less exposure at the back.) In the Radon transform, what we do is go through and compute the line integral along each of these lines, simply summing up the values we see.
If my background is black and my object is white, so the intensities are $0$ and $1$, then I simply follow the line, and as soon as I hit the object, I add $1$ for each pixel I am hitting: $1$, $1$, $1$, $1$, $1$. That gives me a number for each line, and together this is basically a histogram over all my lines, i.e., an overview of the density in this projection.
When we use this, we often change the coordinate system. All we do is, instead of working with the usual $x$ and $y$, we rotate our coordinate system by the same angle as the projection is made. So my rotated $x$-axis, which I call $r$ here, is always perpendicular to my line integrals. This is only done to make the maths easier.
CHANGE OF COORDINATES
\[x\cos\theta + y\sin\theta = r \\ -x\sin\theta + y\cos\theta = s\]The function $P$ has two parameters: $r$, the distance of the line from the origin, and the angle $\theta$. Basically, I do a line integral of my object along all possible such lines.
\[P(r, \theta) = \text{R}\{f(x, y) \} = \int_{L}f(x, y)dl\]If I reformulate this, I can do a change of variables from $x$ and $y$ to $r$ and $s$:
\[\int_{L}f(x, y)dl = \int_{-\infty}^{\infty}f(r\cos\theta-s\sin\theta, r\sin\theta+ s\cos\theta)ds\]And now integrating over a line means going from $-\infty$ to $\infty$ in $s$. I do this for all $r$ values and I get my transform.
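As an illustration only (not part of the course material), here is a minimal Python sketch of a discrete Radon transform under the assumption of a square image: for each angle, the image is rotated so that the integration direction $s$ is vertical, and the line integrals become column sums. The function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_transform(image: np.ndarray, angles_deg) -> np.ndarray:
    """Discrete Radon transform of a square image: one column of the
    sinogram per projection angle, one row per detector position r."""
    sinogram = np.zeros((image.shape[1], len(angles_deg)))
    for i, angle in enumerate(angles_deg):
        # Rotate by -theta so the integration direction s becomes the vertical axis
        rotated = rotate(image, -angle, reshape=False, order=1)
        # Summing over the vertical axis gives the line integral for each position r
        sinogram[:, i] = rotated.sum(axis=0)
    return sinogram
```

Calling this on a simple phantom with, say, `angles_deg = range(0, 180, 1)` produces a sinogram whose columns are exactly the projections $P(r, \theta)$ described above.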
So now, let's see how we use the Radon transform when we do CT.
Basically, we take a lot of projections, here three of them; that's equivalent to taking 3 X-ray images. As before, we have the results of the line integrals, which is the output of the Radon transform. What I can do then is say: wherever this is high, say the value is 5 in one bin of my histogram and 10 in another, I simply smear the value 5 equally back over the length of my field of view. That means I count how many pixels I have along the line and put the value, divided by the number of pixels, into every pixel. So I smear it equally all the way back. I do this for one projection, and then also for the others. If I keep doing this from the different views of the object, an object appears in the middle.
Back projection is the process of mapping the data from the detector space to the image space in Computed Tomography (CT)
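Continuing the sketch above (same assumptions and imports), unfiltered back projection just reverses the smearing step described in the text: each 1D projection is replicated along its integration direction, rotated back to its acquisition angle, and accumulated.

```python
import numpy as np
from scipy.ndimage import rotate

def back_project(sinogram: np.ndarray, angles_deg) -> np.ndarray:
    """Unfiltered back projection of a sinogram onto an n-by-n image grid."""
    n = sinogram.shape[0]
    recon = np.zeros((n, n))
    for i, angle in enumerate(angles_deg):
        # Smear the 1D projection equally along the integration direction ...
        smear = np.tile(sinogram[:, i], (n, 1))
        # ... rotate it back to the angle it was acquired at, and accumulate
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)
```

As the notes point out below, the result of this plain back projection is blurred; that is what the filtering in filtered back projection corrects.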
I want to do a quick check before we move on from back projection to filtered back projection.
I will refer to my true image, or my true object, as being in the spatial domain; after a Fourier transform, I will refer to it as being in the frequency domain.
In the frequency domain, low frequencies are in the middle and high frequencies are on the outside. Low frequencies correspond to long-range structure in my image, for example background variations. High frequencies correspond to details in the image. That's how the image is composed.
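A tiny Python check of this statement (illustration only): after `np.fft.fftshift`, the zero-frequency (DC) component of the 2D spectrum sits in the centre of the array.

```python
import numpy as np

image = np.random.rand(64, 64)                  # any non-negative image
spectrum = np.fft.fftshift(np.fft.fft2(image))  # shift so low frequencies are central
peak = np.unravel_index(np.argmax(np.abs(spectrum)), spectrum.shape)
print(peak)  # (32, 32): the DC / lowest-frequency term lands in the middle
```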
Let's see how we can use this back projection to actually try to make an image.
As before, we have our object $f(x, y)$; this is our real image space. Over here, we are going to work in frequency space. The whole algorithm, which is just called back projection, works in the following way.
I take the Radon transform of my object at one angle. That means I get one output of the line integrals, so just one line with the histogram on it. Then I can take a 1D Fourier transform of this. And then I do this again and again, many times, from different angles.
What I then get is an image of Fourier space. Where I had one projection at one angle before, I now have one line in my Fourier space; I take another projection and I get a second line at the second angle in my Fourier space, and so on. So I have done 1D projections and 1D Fourier transforms, and with several 1D Fourier transforms I have filled up my frequency space.
What I can do now is put these 1D measurements together in my Fourier space, so that I can consider all of them at the same time.
I can help myself by doing a change of coordinates, and once I have done this, I can go back with the two-dimensional inverse Fourier transform and recover my image.
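The notes describe this only verbally; the underlying statement is the Fourier (central) slice theorem: the 1D Fourier transform of the projection at angle $\theta$ equals the 2D Fourier transform of the object evaluated along the line through the origin at that angle,

\[\hat{P}(\omega, \theta) = \hat{f}(\omega\cos\theta,\ \omega\sin\theta).\]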
And this first step is basically done by the X-rays for you: shooting the X-rays through the object gives you these projections. So this whole process tells you, once you have the X-rays from different angles, how to get back to a 2D image.
Now I will distinguish between back projection and filtered back projection: we move on from regular back projection to filtered back projection, and in filtered back projection there are two tricks.
Before we start, below is an example of the blurring in back projection.
Then, filtered back projection gives the following result. $\implies$ Filter out (de-emphasize) the low frequencies to avoid the blurring.
Therefore, filtered back projection yields a better result.
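To make the "filter" concrete, here is a minimal sketch of filtered back projection built on the `back_project` idea above (illustration only, not the course's reference implementation): each projection is multiplied by a ramp filter $|\omega|$ in the frequency domain, which suppresses the low frequencies responsible for the blurring, before being smeared back.

```python
import numpy as np
from scipy.ndimage import rotate

def filtered_back_project(sinogram: np.ndarray, angles_deg) -> np.ndarray:
    """Filtered back projection: ramp-filter each 1D projection, then back-project."""
    n = sinogram.shape[0]
    ramp = np.abs(np.fft.fftfreq(n))                  # ramp filter |omega|
    recon = np.zeros((n, n))
    for i, angle in enumerate(angles_deg):
        proj_f = np.fft.fft(sinogram[:, i])           # 1D FFT of the projection
        filtered = np.real(np.fft.ifft(proj_f * ramp))
        smear = np.tile(filtered, (n, 1))             # smear back over the image plane
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))      # usual FBP scaling for angles over 0-180 degrees
```

Running this on the sinogram from `radon_transform` gives a noticeably sharper reconstruction than `back_project`, which matches the comparison in the notes.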
There are 2 tricks here, but I will leave it until I figure it out.