Stabilization of Thermal Medical Images Based on a User-Selected Area of Interest

I’ve converted to HTML a PowerPoint presentation I made in 2006 about the algorithm we developed in the CWU imaging lab for aligning brain images. Here it is!

Image Registration

  • Image registration is the process of aligning images in such a way that their features can be related
  • For medical purposes, accurately registering images is essential for proper diagnosis

Uses of Image Registration

  • Characterization of normal vs. abnormal shape/variation
  • Functional brain mapping/removing shape variation
  • Surgical planning and evaluation
  • Image-guided surgery
  • Pre-surgical simulation

Methods of Image Registration

  • Manual
  • Automatic
      • Rigid
          • Global autocorrelation
          • Affine transform using points
          • Area ratios
      • Non-rigid
          • Mathematical models

Problem

  • We have a sequence of thermal images with local movement, global movement, and temperature changes
  • Movement can be indistinguishable from temperature change
  • The global movement needs to be removed
  • Local movement needs to be preserved

Factors

  • A thermal image is a set of discrete pixels
  • In a sequence of thermal images, each pixel intensity can be defined in terms of the previous image in the sequence
  • p(i,j,k+1) = GlobalMotionPixel(p(a,b,k)) + ThermalImpact(p(a,b,k)) + LocalMotion(p(a,b,k))

Solvability

  • Global movement can only be solved for when the local movement and the temperature impact from surrounding pixels are known. Otherwise, the components are indistinguishable by observing the intensity of the destination pixels.
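
To make this concrete, here is a toy numeric example in R (the numbers and variable names are made up for illustration; they are not from the paper). Two different mixes of global motion and thermal change produce exactly the same observed intensity, so the observed pixel value alone cannot separate the components:

# Toy example: the observed intensity cannot distinguish motion from heat.
previous <- 100                     # intensity of the source pixel at time k

# Scenario 1: the tissue warmed by 20 units and nothing moved.
observed1 <- previous + 0 + 20 + 0  # GlobalMotion + ThermalImpact + LocalMotion

# Scenario 2: a pixel 20 units warmer shifted in; no thermal change at all.
observed2 <- previous + 20 + 0 + 0

observed1 == observed2              # TRUE: the two cases are indistinguishable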

Other sources of information

  • Heartbeat
    • Detectable by looking at changes in blood vessel size
    • Can be used to find the temperature effects of blood flow
  • Ruler
    • Detectable using nonlinear color filters
    • Known not to have global movement
  • Bones
    • Easily seen and resistant to both local movement and temperature change

Ruler


  • Not visible in grayscale
  • Very low resolution
  • Influence from other pixels heavily distorts ruler’s edge
  • Use of an artificial ruler may help

Bones


  • Easy to see in grayscale
  • No local movement
  • Very little temperature change
  • Temperature influence from outside pixels is small

Basics of Autocorrelation

 

[Figures: Image A, Image B, and Image A minus Image B]
  1. Place image A on image B
  2. Subtract pixels (or subpixels)
  3. Move image B in some direction, then do step 2 again
  4. The location where image B had the least difference from image A is the position where image B is registered with image A
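
A minimal sketch of this search in R (my own illustrative code, not the original implementation), assuming imgA and imgB are grayscale images stored as numeric matrices of equal size, and that max_shift is smaller than the image dimensions:

# Shift-and-subtract search: find the offset of imgB that best matches imgA.
register_by_subtraction <- function(imgA, imgB, max_shift = 10) {
  best <- list(dx = 0, dy = 0, diff = Inf)
  n <- nrow(imgA); m <- ncol(imgA)
  for (dx in -max_shift:max_shift) {
    for (dy in -max_shift:max_shift) {
      # Region of imgA that overlaps imgB shifted by (dx, dy)
      rowsA <- max(1, 1 + dx):min(n, n + dx)
      colsA <- max(1, 1 + dy):min(m, m + dy)
      # Mean absolute pixel difference over the overlap
      d <- mean(abs(imgA[rowsA, colsA] - imgB[rowsA - dx, colsA - dy]))
      if (d < best$diff) best <- list(dx = dx, dy = dy, diff = d)
    }
  }
  best  # the shift with the least difference registers imgB to imgA
}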

Problems

  • Autocorrelation of whole images takes too long
  • Correlating the entire image removes some of the desired movement-like effects
  • Some global movement remains

Enhancements to Autocorrelation

  • Let the user select the most important features, using different color mappings to bring out details
  • Do autocorrelation at a subpixel level on the parts the user selected, rather than on the whole image
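
A sketch of the region-restricted version (again my own illustration, in the same spirit as the previous sketch): slide only the user-selected patch of the reference image over the other image. True subpixel accuracy would interpolate imgB between pixels; that step is omitted here for brevity.

# Correlate only a user-selected rectangle (rows r1:r2, columns c1:c2).
register_region <- function(imgA, imgB, r1, r2, c1, c2, max_shift = 5) {
  patch <- imgA[r1:r2, c1:c2]  # the user-selected area of interest
  best <- list(dx = 0, dy = 0, diff = Inf)
  for (dx in -max_shift:max_shift) {
    for (dy in -max_shift:max_shift) {
      rows <- (r1 + dx):(r2 + dx)
      cols <- (c1 + dy):(c2 + dy)
      # Skip shifts that fall outside imgB
      if (min(rows) < 1 || max(rows) > nrow(imgB) ||
          min(cols) < 1 || max(cols) > ncol(imgB)) next
      d <- mean(abs(patch - imgB[rows, cols]))
      if (d < best$diff) best <- list(dx = dx, dy = dy, diff = d)
    }
  }
  best
}
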
Before: the unfiltered image with no selections. After: the same image showing selected regions with various filters applied.

Advantages

  • Letting the user choose the most stable areas minimizes the chance of removing desired movement
  • Much faster than whole-image autocorrelation
  • Experimentally shown to produce better shifts

Measuring results

  • Compute pixel differences for the selected area before and after the shifts
  • Compute the max, average, and min of these per-pixel differences
  • Average improvement = (average difference before shifts) / (average difference after shifts)
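
In R, the metric might be computed like this (a sketch with my own variable names; before and after are matrices of absolute pixel differences over the selected area, relative to the reference frame):

# Summarize registration quality before and after shifting.
measure_improvement <- function(before, after) {
  c(max_before = max(before), avg_before = mean(before), min_before = min(before),
    max_after = max(after), avg_after = mean(after), min_after = min(after),
    avg_improvement = mean(before) / mean(after))  # > 1 means the shifts helped
}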

Results


  • Average improvement of 1.66 for selected area
  • Experiments using an artificial ruler showed slightly less improvement

Comparison with Area Ratio Conflation Algorithm

  • Regions were chosen in areas with large amounts of temperature change
  • The best regions were not considered

    Comparison with the area ratio conflation algorithm.

Comparison with UCSB WebReg

  • Our algorithm: worst pixel difference 15.5, average pixel difference 4.77259, best pixel difference 0
  • UCSB method: worst pixel difference 30.45, average pixel difference 10.29, best pixel difference 0.08
Points chosen by the UCSB method.
  • Our algorithm’s average improvement over the UCSB WebReg method: 2.715
User-selected area of interest in the brain image used.

Future research

  • Use predictive thermal models to better match images
  • Try to learn parameters that users choose to identify stable image areas
  • Use a database of known anatomical features to help in identifying points that should remain stable

This blog post is based on a paper I coauthored, “An algorithm to stabilize a sequence of thermal brain images,” published in Proceedings of SPIE – The International Society for Optical Engineering, vol. 6512, February 2007.

You can read the full paper here:

https://www.researchgate.net/publication/252234562_An_algorithm_to_stabilize_a_sequence_of_thermal_brain_images

Kovalerchuk, Boris, Joseph Lemley, and Alexander M. Gorbach. “An algorithm to stabilize a sequence of thermal brain images.” Medical Imaging. International Society for Optics and Photonics, 2007.

Stock market prediction using machine learning (Elman, regression, and GMDH)

My primary interest is machine learning and computer vision, but in winter quarter, I took a graduate course in computational statistics.

We had a fun group project that involved using R to analyze stock prices, which later turned into a presentation at SOURCE 2016 when we added some machine learning techniques to make it more interesting.

There is a great R package called quantmod, which we used to get stock data: http://www.quantmod.com/

It is very easy to use. For example:


library(quantmod)  # stock data retrieval
library(ggplot2)   # include ggplot2 so we can graph the data later

start <- as.Date("1986-03-01")
end <- as.Date("2015-12-30")

# Download daily prices for the four symbols below
getSymbols(c('AAPL', 'MSFT', '^IXIC', 'NDX'), from = start, to = end)

This loads the packages and automatically fetches stock price data between 1986-03-01 and 2015-12-30 for Apple, Microsoft, the NASDAQ Composite (^IXIC), and the NASDAQ-100 (NDX).

Want to quickly graph the closing prices of Microsoft stocks during that time? That’s just 2 lines of code:



# Build a data frame of dates and closing prices
MSFT.df <- data.frame(date = time(MSFT), Cl(MSFT))

# Scatter plot of closing prices with a smoothed trend line
ggplot(data = MSFT.df, aes(x = date, y = MSFT.Close)) +
  geom_point() + geom_smooth(se = F) + labs(x = "Date", y = "Close")

Closing prices of Microsoft stock as given by quantmod package and graphed with ggplot2.

As you can see, R facilitates very fast data analytics.

We went on to make some simple predictive regression models and also used the R packages RSNNS and GMDH.

Like most R packages, RSNNS is very easy to use:


library(quantmod)  # for stock data
library(RSNNS)     # Stuttgart Neural Network Simulator

The training and prediction code segment is here:


# Fit an Elman recurrent network: date index in, closing price out
modelElman <- elman(df$date, df$MSFT.Close, size = 8,
                    learnFuncParams = c(0.1), maxit = 1000)
# Append the network's prediction for day n + 1 to the running list
pre <- append(pre, predict(modelElman, n + 1)[1])

We ran this in a loop to get a series of predictions for various dates.
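
Roughly, the loop looked like this (a reconstruction for illustration, since the slides only show the inner call; df holds the dates and closing prices built earlier, and the starting index of 100 is an assumed warm-up window):

# Rolling one-step-ahead forecasts: refit on the first n days, predict day n + 1.
pre <- c()
for (n in 100:(nrow(df) - 1)) {
  fit <- elman(df$date[1:n], df$MSFT.Close[1:n],
               size = 8, learnFuncParams = c(0.1), maxit = 1000)
  pre <- append(pre, predict(fit, n + 1)[1])
}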

It’s similarly easy to use the GMDH model:


library(GMDH)  # group method of data handling forecasting

##### create a time series of closing prices
n <- nrow(df)
stock <- ts(df$MSFT.Close, start = 1, end = n, frequency = 1)

##### predict one step ahead
out <- fcast(stock, input = 3, layer = 4, f.number = 1, tf = "all")
pre <- append(pre, out$mean[1])

We then did a simulation to see which method performs the best on a range of stock values using a simple investment strategy:

  • Every time the model says the stock price will go up tomorrow, buy 10 shares.
  • Every time the model says the stock price will go down tomorrow: sell everything!
  • Continue for a year.
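
The simulation itself is easy to sketch in R (illustrative code, not our exact script; pre holds the model's predicted closing prices and actual the observed ones for the same dates):

# Simulate the buy-10/sell-all strategy and return the profit in dollars.
simulate_strategy <- function(pre, actual) {
  shares <- 0
  cash <- 0
  for (t in 1:(length(actual) - 1)) {
    if (pre[t + 1] > actual[t]) {
      shares <- shares + 10              # predicted rise: buy 10 shares
      cash <- cash - 10 * actual[t]
    } else {
      cash <- cash + shares * actual[t]  # predicted fall: sell everything
      shares <- 0
    }
  }
  cash + shares * actual[length(actual)] # liquidate any remaining shares
}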

Elman neural networks gave the best results on a per-stock basis, followed very closely by GMDH, with regression far behind. Interestingly, however, if you had followed this strategy with all the models in 2015, you would actually have gained money from both Elman and regression. Surprisingly, GMDH lost money.

This is what you’d have made if you had used our models and this investment strategy on Yahoo, JPMorgan, CMS Energy Corporation, Verizon, Apple, and Microsoft.

2015:
Elman: $1334.999
Regression: $383.696
GMDH: -$623.0998

It’s surprising that an Elman neural network did this well with only closing prices. Obviously, closing prices alone are not very reliable predictors of future stock prices, but it managed anyway.

Clearly no one should actually use such a simple method with real money at stake, but it’s still interesting.

I’m graduating and I’ve been accepted to a PhD program in Ireland

I’m very happy to announce that I’m graduating and that I’ve been accepted to a PhD program in Ireland. I can’t wait to get to work pushing the boundaries of my knowledge of deep learning and image processing.

I defended my Master’s thesis on May 27 and am all set to graduate this spring. The defense went very well; my wife and a friend recorded it. I’d upload the recording here, but there is a temporary embargo on my thesis because we plan a third publication based on some of the work I’ve been doing over the last month or so, if it works out.


On June 11, I’ll be speaking in Atlanta at COMPSAC 2016 to present my research on finding large empty areas in high-dimensional spaces.

I also published a full paper at the Modern AI and Cognitive Science Conference. The title was “Comparison of Recent Machine Learning Techniques for Gender Recognition from Facial Images,” and it can be accessed here: Modern AI and Cognitive Science 2016 paper 21
I also made a presentation aimed at a more general audience, which I presented at SOURCE 2016 and which can be accessed here: http://digitalcommons.cwu.edu/source/2016/cos/2/

It feels strange (but nice) to not have any urgently pressing deadlines after months of non-stop urgency.