Lääketieteellisen fysiikan ja tekniikan yhdistys (LFTY)
Finnish Society for Medical Physics and Medical Engineering
LFT Day, 12 February 2009, Kuopion yliopisto
Poster competition entries
1. Ahokas Sari, Tampereen teknillinen yliopisto
Development of low noise active electrodes for EEG measurements.
2. Finnilä Mikko, Oulun yliopisto
Effects of 2,3,7,8-Tetrachlorodibenzo-p-Dioxin Exposure on Bone Material Properties from Macrostructure to Nanostructure.
3. Henelius Andreas, Teknillinen korkeakoulu
Cardiovascular Metrics of Autonomic Nervous System Response to Mental Effort.
4. Holli Kirsi, Tampereen teknillinen yliopisto
Detection of characteristic texture parameters in breast MRI.
5. Koho Sami, Turun yliopisto
Design of a micro volume fluorescence measurement device for immunoassays, using up-converting phosphor labels.
6. Lahtinen Ella, Tampereen teknillinen yliopisto
An Assessment Tool for Assistive Technology.
7. Nieminen Jaakko, Teknillinen korkeakoulu
Enhancement of MRI with SQUID arrays and by polarization encoding.
8. Vuorela Timo, Tampereen teknillinen yliopisto
Unraveling lipoprotein structure via molecular dynamics.
Poster abstracts
Development of low noise active electrodes for EEG measurements
Ahokas Sari, Tampereen teknillinen yliopisto
Nowadays many brain investigation methods exist. EEG measures the electrical activity of the brain and
magnetoencephalography (MEG) detects the magnetic fields that are a result of the electrical activity. Computed
tomography (CT) describes the brain anatomy. Functional Magnetic Resonance Imaging (fMRI) detects the changes in the
blood flow after nerve cell activation. Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET)
measure the metabolic activity of the brain. The advantage of EEG and MEG over the other brain investigation methods is
their high temporal resolution, which makes it possible to study brain function on a millisecond time scale. Their
disadvantage is a poor spatial resolution that limits the possibility to find an exact source location. It was long
believed that MEG offers better spatial resolution than EEG because of the high resistivity of the skull. It has since
been shown that the skull resistivity is not as high as was thought and that EEG actually offers spatial resolution at
least as good as that of MEG, or even better. The EEG recording system is cheaper and smaller than the MEG device and
imposes fewer restrictions on patient movement. In EEG measurements the spatial resolution can be increased by using a
larger number of electrodes and low noise measurement instruments. The advantages of EEG measurement and the
possibilities of better signal quality make it an interesting research topic.
Nowadays, thanks to the development of electronics and especially of integrated circuits, it is possible to design
small active electrodes. Active electrodes are a further development of the familiar passive electrodes: they have an
electronic circuit integrated into the electrode. Active electrodes act as an impedance converter and can also have other
properties such as amplification, filtering or impedance detection. Active electrodes are more noise tolerant and do not
require skin preparation. The need for active electrodes exists especially in high resolution EEG measurements, where a
large number of electrodes is used to obtain the best possible accuracy and signal quality, and in emergency medicine where
fast electrode placement is essential. This Master of Science thesis is part of an ongoing research project where
objectives are to develop methods and instrumentation to record electric fields of the brain more accurately. The main
purpose of this project was to develop a simple and low noise active electrode that can be used in high resolution EEG
measurements. Also solutions for electrode wiring were discussed during the design process.
Effects of 2,3,7,8-Tetrachlorodibenzo-p-Dioxin Exposure on Bone Material Properties from Macrostructure to Nanostructure
Finnilä Mikko, Oulun yliopisto
Waste management, gas engines, geological processes and even tobacco smoking may produce aromatic chlorinated compounds
called dioxins. Dioxins are notorious compounds: they are very persistent and tend to accumulate in the food chain. These
compounds are lipophilic and very stable both in nature and in metabolism. Dioxins are potent endocrine disrupting agents
and as such they may also have adverse effects on bone quality, as shown in animal experiments. Dioxins have been shown to
decrease bone strength and mineral density and to impair bone architecture. However, the detailed effects of dioxins on
bone material properties are unknown. The aim of this study was to define the effects of
2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on bone
material properties investigated by nanoindentation and micro-X-ray-diffraction.
The experimental animals were exposed to TCDD in utero and measurements were made at two time points: postnatal days
(PND) 35 and 70. Both analysis groups consisted of animals exposed to the maximal dosage of 1 µg/kg (N=7-8 per
group) or to vehicle as a control. Animals were sacrificed and the long bones were dissected out. Peripheral quantitative
computed tomography (pQCT) measurements and mechanical testing had been performed previously. The current measurements by
nanoindentation and micro-X-ray-diffraction (N=1 per group) were done from the same skeletal locations as the earlier
measurements.
Nanoindentation results showed that TCDD delays the age-dependent development of bone material (maturation), leaving
bones more elastic and softer. New correlations between material parameters and bone mineral properties were also found.
Cardiovascular Metrics of Autonomic Nervous System Response to Mental Effort
Henelius Andreas, Teknillinen korkeakoulu
The amount of mental workload in occupational tasks has increased, calling for the measurement of mental workload to
guarantee safe and efficient working conditions. The measurement of mental workload is especially important in
safety-critical professions, where metrics of mental workload could be used as pre-alarm systems for operators. The aim
of this thesis was to investigate the response of cardiovascular metrics to mental workload. The correlation between
perceived subjective task difficulty, measured using the NASA Task Load Index (TLX), and objective task difficulty,
measured by cardiovascular metrics, was investigated. The performance of the Surgical Stress Index (SSI) in a novel
application as a metric of mental workload was also studied.
Electrocardiograms, continuous blood pressure, respiration and photoplethysmographic (PPG) waveform were recorded from
subjects during the performance of a computerised multitask test inducing different levels of mental workload. Several
time and frequency-domain cardiovascular metrics were calculated from the recorded signals. Differences between low and
high mental workload were studied and the classification performance of the cardiovascular metrics was analysed using
prediction probability and receiver operating characteristics (ROC) analysis.
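The ROC part of such an analysis can be sketched as follows. The blood pressure values and workload labels below are invented for illustration, and the area under the ROC curve is computed directly from the rank-sum (Mann-Whitney) identity rather than with any particular toolbox:

```python
import numpy as np

def roc_auc(metric, label):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    metric = np.asarray(metric, dtype=float)
    label = np.asarray(label)
    pos = metric[label == 1]   # high-workload condition
    neg = metric[label == 0]   # low-workload condition
    # Fraction of (high, low) pairs in which the metric is larger under
    # high workload; ties count as half a correctly ranked pair.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Invented mean blood pressure values for the two workload levels.
bp = [82, 85, 88, 90, 95, 97, 99, 104]
workload = [0, 0, 0, 0, 1, 1, 1, 1]
print(roc_auc(bp, workload))  # 1.0: the two groups separate perfectly here
```

An AUC of 0.5 means the metric carries no workload information; values near 1 mean the metric alone classifies low versus high workload almost perfectly.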
The agreement between subjective and objective workload measures was good. The time-domain metrics performed better than
the frequencydomain metrics. The cardiovascular metrics of blood pressure, heart rate and vasoconstriction that best
classified mental workload were, in order, average mean blood pressure, average interbeat interval length and standard
deviation of photoplethysmographic amplitude. The SSI was found to be a promising index of mental workload.
The findings indicate that the cardiovascular metrics studied in this thesis can be used in measuring mental workload,
although further research is needed to establish normal levels for different types and degrees of workload.
Detection of characteristic texture parameters in breast MRI
Holli Kirsi, Tampereen teknillinen yliopisto
Breast cancer is the most common cancer in women. Breast MRI (BMRI) has emerged as a promising technique for detecting,
diagnosing, and staging the condition. Automated image analysis aims to extract relevant information from MR images of
the breast and improve the accuracy and consistency of image interpretation. Texture analysis (TA) is one possible means
of detecting tissue features in biomedical images. The primary aim of this work was to develop a method to easily and
efficiently perform the comparison and evaluation of calculated breast MRI texture parameters. The specific aim was to
evaluate the parameters that identify the most important breast cancer characteristics and to assess the ability of MRI
TA to characterize breast cancer tissue.
Eight patients with histopathologically proven breast cancer were selected for this preliminary study. The texture
analysis was performed with the MaZda texture application, using histogram, gradient, run-length matrix, co-occurrence
matrix, autoregressive model and wavelet parameters for tissue classification. The most discriminant texture features,
identified by Fisher coefficients and POE+ACC (probability of classification error and average correlation coefficients),
were evaluated between breast cancer tissue and reference tissue (from the healthy breast and from tissue next to the
cancer area), between patients, between different image series and between histological types of carcinoma (ductal vs.
lobular). Raw data
analysis (RDA), principal component analysis (PCA), linear discriminant analysis (LDA), and nonlinear discriminant
analysis (NDA) were run for each subset of images and chosen texture features.
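The idea behind Fisher-coefficient feature ranking can be illustrated with a small sketch using one common form of the Fisher discriminant ratio (squared distance of class means over the sum of class variances). The feature names and values below are hypothetical, not taken from the study:

```python
import numpy as np

def fisher_coefficient(x1, x2):
    """Fisher discriminant ratio of one texture feature for two classes:
    squared distance of the class means over the sum of the class variances."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    return (x1.mean() - x2.mean()) ** 2 / (x1.var() + x2.var())

# Hypothetical feature values for cancer vs. healthy reference ROIs.
features = {
    "contrast":    ([3.1, 3.4, 2.9, 3.3], [1.0, 1.2, 0.9, 1.1]),
    "correlation": ([0.5, 0.6, 0.4, 0.5], [0.5, 0.4, 0.6, 0.5]),
}
ranked = sorted(features, key=lambda f: fisher_coefficient(*features[f]),
                reverse=True)
print(ranked)  # "contrast" separates the two classes far better here
```

Features with the largest ratios are kept for the subsequent RDA/PCA/LDA/NDA classification steps.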
The developed Excel documentation system enabled a faster and more effective way to compare different sets of texture
features, which sped up the feature comparison process considerably. The results revealed differences in the textures
in every imaging series when non-cancer tissue and cancer tissue were compared. The selected texture parameters also
differed between the two histological groups. The data analyses RDA, PCA, LDA and NDA performed well in discriminating
the breast cancer area from the healthy breast area, and the best discrimination results were obtained within two
dynamic contrast-enhanced MRI subtraction series.
Texture analysis applied to breast MRI seems to be a potential tool in detecting and discriminating malignant and normal
breast tissues and different histopathological breast cancer types, but further research is needed on larger data
sets. More studies of textural features in various scales and configurations are necessary in order to select a useful
set of texture descriptors, optimal for the specific task of normal and abnormal breast tissue characterization. Part
of this work was accepted for the 4th European Biomedical Engineering Congress, which took place in November 2008 in
Antwerp, Belgium. The abstract has been published in the abstract book of the MBEC 2008 congress and a short paper in
IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological
Engineering", Vol. 22, 2008, Antwerp, Belgium.
Design of a micro volume fluorescence measurement device for immunoassays, using up-converting phosphor labels
Koho Sami, Turun yliopisto
Immunoassays are popular methods for monitoring bio-affinity binding processes taking place for example in human blood as
a result of an immunoresponse. Both information about the quantity of bound molecules, and about the quality of the
binding process, such as speed or specificity, can be extracted. Since the first immunoassay method, presented by Berson
and Yalow in 1959, the ability to monitor bio-affinity binding has been found useful in a wide array of fields, from the
food industry and pharmaceutical development to various bio-sciences and, of course, medical diagnostics. The need for
cost savings, particularly in industrial applications where a large number of samples needs to be measured daily, has led
to the development of automated immunoassay measurement technologies that would ideally combine the shortest possible
measurement time with the lowest possible material costs. To face these technological challenges, the TPX measurement
technology was developed in the Laboratory of Biophysics at the University of Turku during the late 1990s. Taking
advantage of the research group's expertise in optics and luminescent materials, the TPX technology performs micro-volume
optical measurements on luminescent labels, taking advantage of a non-linear phenomenon called two-photon up-conversion,
seen in certain luminescent labels when excited with high intensity infra-red light.
During the current project a new technology, one might say a further development of the TPX, was realised. The same idea
of a micro-volume optical measurement set-up was adapted to use a new form of luminescent probes, called up-converting
phosphors (UPT). The new labels have a number of advantages compared to the ones used for two-photon up-conversion,
among them an orders-of-magnitude higher conversion efficiency and the complete absence of photo-bleaching, which at
least in theory should allow cost savings to be made in high-throughput environments by way of a further reduction in
reaction volume. While the basic idea remained the same as in the original TPX, in practice the whole measurement system
was re-designed from the beginning. A new embedded control system was realised using an Atmel microcontroller, and new
data acquisition circuits for the photo-detectors were realised due to changes in the optical signal. Embedded
software for the control logic was written in the C programming language, and PC software for performing the measurements
was realised with LabVIEW. For the optical system, a new continuous-wave infra-red laser with a 980 nm wavelength was
acquired, whereas the rest was kept mostly as it was in the original TPX device.
The designed device has been found to be able to detect fluorescence at extremely low excitation light intensities,
reaching down to fractions of a milliwatt of optical power, whereas for example with the rhodamine labels widely used in
TPX at least some tens of milliwatts were required. As in practice the excitation intensity would be kept at reasonably
high levels for particle-trapping purposes, the superior conversion efficiency of the labels has been seen to produce a
fair amount of signal at low label concentrations. This is all as expected and suggests that the designed system
performs as it should. However, no real assay measurements have been done with the device yet, as the focus has been on
hardware testing. The project is still underway, and the assay performance results will be presented shortly
(hopefully already at the LFT days in Kuopio).
An Assessment Tool for Assistive Technology
Lahtinen Ella, Tampereen teknillinen yliopisto
The primary aim of this research was to develop an instrument for the assessment of assistive technology (AT) devices
and services. Specifically, the aim was to develop a customer-oriented instrument that can be used to assess user
satisfaction with AT.
The research consists of two parts. In the literature part, a number of existing AT assessment tools are reviewed. In
the second part a new assessment tool, the ITSE-Assessment Model, is developed.
The ITSE-Assessment Model is a questionnaire that can be filled in by interviewing an end user of AT. The questionnaire
consists of four dimensions of AT: usability, utility, quality of service and costs, and it can be used to assess any
kind of AT. The interviewee need only be familiar with the AT that is being assessed, and the interviewer with the
ITSE-Assessment Model.
When the ITSE-Assessment Model was developed, some of the instruments presented earlier in this work that were perceived
as functional were used as a basis. In addition, the ideas and opinions of AT specialists were taken into account when
determining the content of the questionnaire.
The ITSE-Assessment Model was tested and proved to be a functional instrument that can be used to assess AT in detail.
However, to make the ITSE-Assessment Model easier to use, some of the recommended improvements should be carried out,
and in the future the validity of the instrument should be tested in a more systematic way.
Nevertheless, the potential of the ITSE-Assessment Model lies precisely in its examining approach. The ITSE-Assessment
Model is intended to create discussion about the challenges of AT and to elicit improvement proposals. At its best, the
ITSE-Assessment Model supports communication between end users and designers of AT and in this way contributes to the
improvement of AT devices and services.
Enhancement of MRI with SQUID arrays and by polarization encoding
Nieminen Jaakko, Teknillinen korkeakoulu
Magnetic resonance imaging (MRI) is a method to study the interior structure of matter. It is based on detecting
precession signals from a magnetized sample. The detection can be sped up by using sensor arrays. In low-field MRI, a
sample is polarized in a magnetic field of several millitesla. The polarization is followed by SQUID-based signal
detection at microtesla-region fields. In this study, a new encoding method to reduce imaging times in low-field MRI is
developed.
In MRI, signals of a sensor array can be written in vector form s(t) = Am(t), where A is a lead field matrix and m(t)
contains the components of the voxel magnetizations. In polarization encoding, various polarizing fields affect the
initial magnetization of the sample in consecutive measurements. Assume that for the kth measurement the magnetizations
are m_k(t) = C_k m(t), where C_k is a conversion matrix. Then s_k(t) = A C_k m(t). A large signal vector s' and a
generalized lead field matrix A' are constructed by stacking the s_k and the A C_k row-wise, respectively. Then,
s'(t) = A' m(t). If the polarizing fields are chosen properly, rank A' > rank A, i.e., the number of linearly
independent sensors is increased.
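The rank argument can be illustrated numerically on a toy problem. The matrix sizes and the diagonal form of the C_k below are assumptions made purely for this sketch, not the configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_voxels = 4, 12  # toy sizes; a real array has hundreds of SQUIDs
A = rng.standard_normal((n_sensors, n_voxels))   # lead field matrix

# Three polarizing-field configurations -> three conversion matrices C_k.
# Diagonal C_k model a voxel-wise rescaling of the initial magnetization.
Cs = [np.diag(rng.standard_normal(n_voxels)) for _ in range(3)]

# Stack the per-measurement lead fields A C_k row-wise into A'.
A_prime = np.vstack([A @ C for C in Cs])

print(np.linalg.matrix_rank(A))        # 4: limited by the sensor count
print(np.linalg.matrix_rank(A_prime))  # larger: more independent "virtual sensors"
```

Each extra polarizing field adds rows to A' that are not linear combinations of the existing ones, which is exactly what "increasing the number of linearly independent sensors" means here.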
The proposed method was tested with low-field MRI simulations using the geometry of a 304-channel SQUID system. Noisy
signals from a 15×15-voxel phantom were simulated for the SQUIDs. The inverse problem of image reconstruction was solved
using truncated singular value decomposition. The results show that by increasing the number of polarizing fields, the
imaging times are reduced. Equally, in a given time the imaging quality can be improved by increasing the number of
polarizing fields.
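The truncated-SVD reconstruction step can be sketched on a toy problem. The sizes, noise level and random lead field below are made up for illustration and bear no relation to the simulated SQUID geometry:

```python
import numpy as np

def tsvd_solve(A, s, k):
    """Least-squares solution of s = A m keeping only the k largest
    singular values, which suppresses noise-amplifying small ones."""
    U, sig, Vt = np.linalg.svd(A, full_matrices=False)
    inv = np.zeros_like(sig)
    inv[:k] = 1.0 / sig[:k]
    return Vt.T @ (inv * (U.T @ s))

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))                 # toy lead field: 40 sensors, 25 voxels
m_true = rng.standard_normal(25)                  # "phantom" magnetization
s = A @ m_true + 0.01 * rng.standard_normal(40)   # noisy sensor signals

m_hat = tsvd_solve(A, s, k=25)
print(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))  # small relative error
```

Choosing the truncation level k trades resolution against noise amplification: dropping the smallest singular values discards the measurement directions along which the noise dominates.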
In this study, a new encoding method was developed for MRI. Polarization encoding introduces additional independent
sensors and increases the information about the sample. The method reduces imaging times. It is possible to combine the
method with other encoding methods; for instance, one dimension can be encoded by polarization encoding and the others by
Fourier encoding techniques. Polarization encoding is especially suitable for low-field MRI, because at low fields it is
possible to construct sets of polarizing fields. Moreover, at low fields, imaging processes need improvements because
current methods are slow and suffer from poor image quality. In addition to MRI, polarization encoding can also be used
to improve other imaging techniques, such as magnetorelaxometry.
Unraveling lipoprotein structure via molecular dynamics
Vuorela Timo, Tampereen teknillinen yliopisto
Cardiovascular diseases are the primary cause of death in industrialized countries. Heart attacks and cerebral
infarctions are the end stages of a cardiovascular disease called atherosclerosis. According to the current view,
atherosclerosis is both promoted and restrained by lipoproteins circulating in the blood. In particular, low density
lipoprotein (LDL) levels have been found to correlate positively, and high density lipoprotein (HDL) levels inversely,
with the risk of atherosclerosis.
Lipoproteins are complex macromolecular structures designed to carry cholesterol in an esterified form inside the
body. They consist of a spherical lipid part partly surrounded by a protein part. Although the molecular compositions
are quite well understood, the molecular-scale structures have not been solved. In this study we use new computational
methods to obtain information about the molecular-scale ordering in lipoprotein particles of different sizes.
In this study the structures of two different lipoprotein particles were explored using classical molecular dynamics
methods. The smaller system was constructed to model the lipid part of an HDL particle and the larger system the lipid
part of an LDL particle. The LDL particle contains over 3000 molecules and is more than three times larger than the
systems in current state-of-the-art studies. The time scales in this study are also over an order of magnitude longer
than in earlier studies. To achieve these length and time scales, a method called coarse graining was used: on average,
four carbon atoms are substituted by one bead. Coarse graining reduces the complexity of the systems while preserving
the behavior of the different molecules. Extensive structural analysis was performed on the general structure as well as
on the conformations of each molecule type. The dynamics of the whole lipoprotein particles as well as of the individual
components were also studied.
The current view of the lipoprotein structure is that the particles are composed of a disordered hydrophobic core and a
hydrophilic surface. The results of this study support the view, but also reveal that there is an additional layer
between the core and the surface. The results show significant ordering of the core lipids close to the surface that is
not observed in the core region. Cholesteryl ester molecules in the interface order parallel to the phospholipid tails,
similarly to cholesterol molecules. Triacylglycerols in the interface also pack more compactly and tend to orient in a
way that minimizes contacts with water. The diffusion rates of the different components are in good agreement
with the results from earlier experimental and computational studies.
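Diffusion coefficients of the kind compared here are commonly estimated from mean-squared displacements via the Einstein relation MSD(t) = 2 d D t. A minimal sketch of that estimator, run on a synthetic Brownian trajectory rather than on data from this study:

```python
import numpy as np

def diffusion_coefficient(positions, dt, dim=3):
    """Estimate D from one trajectory via the Einstein relation
    MSD(t) = 2 * dim * D * t, by fitting MSD against lag time."""
    n = len(positions)
    lags = np.arange(1, n // 50)  # short lags only: long lags average poorly
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2,
                                   axis=1))
                    for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / (2 * dim)

# Synthetic Brownian trajectory with a known D as a sanity check.
rng = np.random.default_rng(2)
D_true, dt = 0.5, 0.01
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(20000, 3))
traj = np.cumsum(steps, axis=0)
print(diffusion_coefficient(traj, dt))  # close to D_true = 0.5
```

Restricting the fit to short lag times is a standard precaution: at long lags the MSD is averaged over few independent segments and becomes unreliable.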
This study demonstrates that molecular dynamics, and especially coarse-grained methods, can be used to give valuable
insight into the structures of lipoproteins that is unreachable by other methods. The results are in line with the
current view of lipoprotein particle structure and, in addition, support a recently presented three-layer structure for
lipoprotein particles. This study gives a solid foundation for future projects that aim at a better understanding of the
structure-function relationship of lipoproteins.