  • 1
    Language: English
    In: IEEE transactions on medical imaging, 2016-05, Vol.35 (5), p.1262-1272
    Description: The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both for describing the training data and for classification. We present experiments with feature learning for lung texture classification and airway detection in CT images. In both applications, a combination of learning objectives outperformed purely discriminative or generative learning, increasing, for instance, the lung tissue classification accuracy by 1 to 8 percentage points. This shows that discriminative learning can help an otherwise unsupervised feature learner to learn filters that are optimized for classification.
    Subject(s): Algorithms ; Analysis ; Computed tomography ; CT imaging ; Deep learning ; Diagnostic imaging ; Discrimination learning ; Feature extraction ; Humans ; Image Processing, Computer-Assisted ; Learning systems ; Lung ; Lung - diagnostic imaging ; Lungs ; Machine Learning ; Neural network ; Neural networks ; Neural Networks (Computer) ; Pattern recognition and classification ; Representation learning ; Research ; Restricted Boltzmann machine ; Segmentation ; Standards ; Tomography, X-Ray Computed - methods ; Training data ; Usage ; X-ray imaging and computed tomography
    ISSN: 0278-0062
    E-ISSN: 1558-254X
    Source: IEEE Electronic Library (IEL)
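The hybrid learning objective summarized in this abstract combines a generative and a discriminative term. The display below is only an illustrative formulation of such a combined objective, not the paper's exact notation; the mixing weight β is a hypothetical symbol introduced here.

```latex
% Illustrative hybrid objective for a classification RBM (sketch, not the
% paper's notation): a weighted combination of a generative term over the
% input patch x with label y and a discriminative term; \beta is a
% hypothetical mixing weight.
\mathcal{L}(\theta) = \beta \, \log p_\theta(\mathbf{x}, y)
                    + (1 - \beta) \, \log p_\theta(y \mid \mathbf{x})
```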
  • 2
    Language: English
    In: IEEE transactions on medical imaging, 2019-02, Vol.38 (2), p.638-648
    Description: Machine learning algorithms can have difficulties adapting to data from different sources, for example from different imaging modalities. We present and analyze three techniques for unsupervised cross-modality feature learning, using a shared auto-encoder-like convolutional network that learns a common representation from multi-modal data. We investigate a form of feature normalization, a learning objective that minimizes cross-modality differences, and modality dropout, in which the network is trained with varying subsets of modalities. We measure the same-modality and cross-modality classification accuracies and explore whether the models learn modality-specific or shared features. This paper presents experiments on two public data sets, with knee images from two MRI modalities, provided by the Osteoarthritis Initiative, and brain tumor segmentation on four MRI modalities from the BRATS challenge. All three approaches improved the cross-modality classification accuracy, with modality dropout and per-feature normalization giving the largest improvement. We observed that the networks tend to learn a combination of cross-modality and modality-specific features. Overall, a combination of all three methods produced the most cross-modality features and the highest cross-modality classification accuracy, while maintaining most of the same-modality accuracy.
    Subject(s): autoencoders ; Biomedical imaging ; Computed tomography ; Computer Science ; Computer Science, Interdisciplinary Applications ; Decoding ; Deep Learning ; Encoding ; Engineering ; Engineering, Biomedical ; Engineering, Electrical & Electronic ; Humans ; Image Processing, Computer-Assisted - methods ; Image reconstruction ; Imaging Science & Photographic Technology ; Knee Joint - diagnostic imaging ; Life Sciences & Biomedicine ; Magnetic Resonance Imaging ; Multimodal Imaging - methods ; Radiology, Nuclear Medicine & Medical Imaging ; Representation learning ; Science & Technology ; Technology ; Training ; transfer learning
    ISSN: 0278-0062
    E-ISSN: 1558-254X
    Source: Web of Science - Science Citation Index Expanded - 2019
    Source: IEEE Electronic Library (IEL)
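Of the three techniques described in this abstract, modality dropout is the most directly sketchable: during training, whole input modalities are randomly zeroed so the shared representation cannot rely on any single one. The snippet below is a minimal PyTorch-style illustration with made-up tensor shapes, not the authors' implementation.

```python
import torch

def modality_dropout(modalities, p_drop=0.5, training=True):
    """Randomly zero whole modalities (a rough sketch of modality dropout).

    modalities: list of tensors, one per modality, each (batch, channels, H, W).
    At least one modality is always kept so the input is never all zeros.
    """
    if not training:
        return modalities
    n = len(modalities)
    keep = torch.rand(n) > p_drop            # which modalities survive this step
    if not keep.any():                       # guarantee at least one modality
        keep[torch.randint(n, (1,))] = True
    return [m if k else torch.zeros_like(m) for m, k in zip(modalities, keep)]

# Hypothetical usage with two MRI modalities of the same anatomy:
t1 = torch.randn(4, 1, 64, 64)
t2 = torch.randn(4, 1, 64, 64)
inputs = modality_dropout([t1, t2], p_drop=0.5)
```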
  • 3
    Language: English
    In: IEEE transactions on medical imaging, 2015-05, Vol.34 (5), p.1018-1030
    Description: The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter/MS-lesion segmentation. The experiments showed that when only a small amount of representative training data is available, transfer learning can greatly outperform common supervised-learning approaches, reducing classification errors by up to 60%.
    Subject(s): Biomedical imaging ; Brain - pathology ; Humans ; Image Processing, Computer-Assisted - methods ; Image segmentation ; Kernel ; Machine Learning ; Magnetic Resonance Imaging ; Multiple Sclerosis - pathology ; pattern recognition ; Pattern Recognition, Automated - methods ; Protocols ; Support Vector Machine ; Support vector machines ; Training ; Training data ; transfer learning
    ISSN: 0278-0062
    E-ISSN: 1558-254X
    Source: IEEE Electronic Library (IEL)
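One common way to realize a transfer classifier of the kind described in this abstract is to weight the small representative training set more heavily than the larger, slightly different training set. The fragment below sketches this with scikit-learn's per-sample weights; the weight value and data are placeholders, and this is not one of the paper's four specific transfer classifiers.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical features/labels: a few representative (same-scanner) samples
# and many samples from other scanners with slightly different characteristics.
X_rep, y_rep = np.random.randn(20, 10), np.random.randint(0, 2, 20)
X_other, y_other = np.random.randn(500, 10), np.random.randint(0, 2, 500)

X = np.vstack([X_rep, X_other])
y = np.concatenate([y_rep, y_other])

# Give representative samples a larger weight (value chosen arbitrarily here).
weights = np.concatenate([np.full(len(y_rep), 5.0), np.ones(len(y_other))])

clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
```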
  • 4
    Language: English
    In: IEEE transactions on medical imaging, 2019-01, Vol.38 (1), p.213-224
    Description: Many medical image segmentation methods are based on the supervised classification of voxels. Such methods generally perform well when provided with a training set that is representative of the test images to segment. However, problems may arise when training and test data follow different distributions, for example due to differences in scanners, scanning protocols, or patient groups. Under such conditions, weighting training images according to distribution similarity has been shown to greatly improve performance. However, this assumes that a part of the training data is representative of the test data; it does not make unrepresentative data more similar. We therefore investigate kernel learning as a way to reduce differences between training and test data and explore the added value of kernel learning for image weighting. We also propose a new image weighting method that minimizes the maximum mean discrepancy (MMD) between training and test data, which enables the joint optimization of image weights and the kernel. Experiments on brain tissue, white matter lesion, and hippocampus segmentation show that both kernel learning and image weighting, when used separately, greatly improve performance on heterogeneous data. Here, MMD weighting obtains similar performance to previously proposed image weighting methods. Combining image weighting and kernel learning, optimized either individually or jointly, can give a small additional improvement in performance.
    Subject(s): Algorithms ; Biomedical imaging ; Brain ; Computer Science ; Computer Science, Interdisciplinary Applications ; Engineering ; Engineering, Biomedical ; Engineering, Electrical & Electronic ; Hippocampus - diagnostic imaging ; Humans ; image analysis ; Image Processing, Computer-Assisted - methods ; Image segmentation ; Imaging Science & Photographic Technology ; Kernel ; Learning systems ; Life Sciences & Biomedicine ; Machine learning ; magnetic resonance imaging ; Magnetic Resonance Imaging - methods ; Probability density function ; Radiology, Nuclear Medicine & Medical Imaging ; Science & Technology ; supervised learning ; Supervised Machine Learning ; Technology ; Training ; White Matter - diagnostic imaging
    ISSN: 0278-0062
    E-ISSN: 1558-254X
    Source: Web of Science - Science Citation Index Expanded - 2019
    Source: IEEE Electronic Library (IEL)
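The maximum mean discrepancy mentioned in this abstract has a simple empirical form once a kernel is chosen. The sketch below computes an (unweighted) biased MMD² estimate between training and test feature sets with an RBF kernel, as a starting point for the kind of weight optimization the paper describes; bandwidth and data are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd_squared(X_train, X_test, gamma=1.0):
    """Biased empirical estimate of squared MMD with an RBF kernel.

    MMD^2 = mean k(x, x') + mean k(z, z') - 2 * mean k(x, z),
    where x are training samples and z are test samples.
    """
    k_tt = rbf_kernel(X_train, X_train, gamma=gamma).mean()
    k_ss = rbf_kernel(X_test, X_test, gamma=gamma).mean()
    k_ts = rbf_kernel(X_train, X_test, gamma=gamma).mean()
    return k_tt + k_ss - 2.0 * k_ts

# Hypothetical voxel-feature matrices from two scanners:
X_train = np.random.randn(300, 8)
X_test = np.random.randn(200, 8) + 0.5   # shifted distribution
print(mmd_squared(X_train, X_test, gamma=0.5))
```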
  • 5
    Language: English
    In: Medical physics (Lancaster), 2019-10, Vol.46 (10), p.4431-4440
    Description: Purpose: In this work, we adapt a method based on multiple hypothesis tracking (MHT), which has been shown to give state-of-the-art vessel segmentation results in interactive settings, for the purpose of extracting trees. Methods: Regularly spaced tubular templates are fit to image data, forming local hypotheses. These local hypotheses are then used to construct the MHT tree, which is traversed to make segmentation decisions. Some critical parameters in the method we build on are scale-dependent and have an adverse effect when tracking structures of varying dimensions. We propose to use statistical ranking of local hypotheses in constructing the MHT tree, which yields a probabilistic interpretation of scores across scales and helps alleviate the scale dependence of MHT parameters. This enables our method to track trees starting from a single seed point. Results: The proposed method is evaluated on chest computed tomography data to extract airway trees and coronary arteries and compared to relevant baselines. In both cases, we show that our method performs significantly better than the original MHT method in a semiautomatic setting. Conclusions: The introduced statistical ranking of local hypotheses allows the MHT method to be used in noninteractive settings, yielding competitive results for segmenting tree structures.
    Subject(s): airways ; Computed Tomography Angiography ; Coronary Vessels - diagnostic imaging ; Humans ; Image Processing, Computer-Assisted - methods ; Imaging, Three-Dimensional ; Life Sciences & Biomedicine ; multiple hypothesis tracking ; Radiation Dosage ; Radiology, Nuclear Medicine & Medical Imaging ; Science & Technology ; Thorax - diagnostic imaging ; Tomography, X-Ray Computed ; tree segmentation ; vessels
    ISSN: 0094-2405
    E-ISSN: 2473-4209
    Source: Web of Science - Science Citation Index Expanded - 2019
    Source: Wiley Online Library All Journals
    Source: Get It Now
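The statistical ranking idea in this abstract, making raw template-fit scores comparable across scales, can be illustrated by replacing each score with its empirical percentile within the score distribution observed at its own scale. The snippet below is only a loose illustration of such ranking, not the paper's actual hypothesis scoring.

```python
import numpy as np

def percentile_ranks(scores, reference_scores):
    """Map raw fit scores to empirical percentiles of a reference distribution.

    Scores from different scales become comparable once each is expressed as
    a percentile of the score distribution observed at its own scale.
    """
    reference = np.sort(np.asarray(reference_scores))
    return np.searchsorted(reference, scores, side="right") / len(reference)

# Hypothetical tubular-template fit scores at two different scales:
ref_small = np.random.gamma(2.0, 1.0, 1000)    # reference scores, small scale
ref_large = np.random.gamma(5.0, 1.0, 1000)    # reference scores, large scale
print(percentile_ranks([3.0], ref_small), percentile_ranks([3.0], ref_large))
```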
  • 6
    Language: English
    In: IEEE transactions on medical imaging, 2010-02, Vol.29 (2), p.559-569
    Description: We aim at improving quantitative measures of emphysema in computed tomography (CT) images of the lungs. Current standard measures, such as the relative area of emphysema (RA), rely on a single intensity threshold on individual pixels, thus ignoring any interrelations between pixels. Texture analysis allows for a much richer representation that also takes the local structure around pixels into account. This paper presents a texture classification-based system for emphysema quantification in CT images. Measures of emphysema severity are obtained by fusing pixel posterior probabilities output by a classifier. Local binary patterns (LBP) are used as texture features, and joint LBP and intensity histograms are used for characterizing regions of interest (ROIs). Classification is then performed using a k nearest neighbor classifier with a histogram dissimilarity measure as distance. A 95.2% classification accuracy was achieved on a set of 168 manually annotated ROIs, comprising the three classes: normal tissue, centrilobular emphysema, and paraseptal emphysema. The measured emphysema severity was in good agreement with a pulmonary function test (PFT) achieving correlation coefficients of up to | r | = 0.79 in 39 subjects. The results were compared to RA and to a Gaussian filter bank, and the texture-based measures correlated significantly better with PFT than did RA.
    Subject(s): Algorithms ; Analysis ; Area measurement ; Computed tomography ; Current measurement ; Diagnosis ; Diagnostic imaging ; Emphysema ; Emphysema, Pulmonary ; Female ; Histograms ; Humans ; Image Interpretation, Computer-Assisted - methods ; Image processing ; Image texture analysis ; local binary patterns (LBPs) ; Lung - diagnostic imaging ; Lungs ; Male ; Measurement standards ; Nearest neighbor searches ; Normal Distribution ; Pattern analysis ; Performance evaluation ; Pulmonary Emphysema - classification ; Pulmonary Emphysema - diagnostic imaging ; quantitative computed tomography (CT) ; Research ; Respiratory Function Tests ; Severity of Illness Index ; Smoking ; texture analysis ; tissue classification ; Tomography ; Tomography, X-Ray Computed - methods ; Usage
    ISSN: 0278-0062
    E-ISSN: 1558-254X
    Source: IEEE Electronic Library (IEL)
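The feature pipeline in this abstract, local binary pattern histograms classified with a k-nearest-neighbor classifier using a histogram dissimilarity, can be sketched roughly as below. The LBP parameters, the chi-square-style distance, and the data are placeholders rather than the paper's exact configuration, and the joint intensity histogram is omitted for brevity.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(roi, p=8, r=1):
    """Histogram of uniform LBP codes for a 2-D region of interest."""
    codes = local_binary_pattern(roi, P=p, R=r, method="uniform")
    # Uniform LBP with P points yields codes 0..P+1, so use a fixed bin count.
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square histogram dissimilarity, one common choice for LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Hypothetical ROIs (random noise stands in for CT patches) and labels.
rois = [np.random.rand(31, 31) for _ in range(60)]
labels = np.random.randint(0, 3, 60)            # e.g. normal / CLE / PSE tissue
features = np.array([lbp_histogram(roi) for roi in rois])

knn = KNeighborsClassifier(n_neighbors=5, metric=chi2_distance).fit(features, labels)
```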
  • 7
    Language: English
    In: European radiology, 2017-05-18, Vol.27 (11), p.4680-4689
    Description: Objectives: To quantify airway and artery (AA) dimensions in cystic fibrosis (CF) and control patients for objective CT diagnosis of bronchiectasis and airway wall thickness (AWT). Methods: Spirometer-guided inspiratory and expiratory CTs of 11 CF and 12 control patients were collected retrospectively. Airway pathways were annotated semi-automatically to reconstruct three-dimensional bronchial trees. All visible AA-pairs were measured perpendicular to the airway axis. Inner, outer and AWT (outer − inner) diameters were divided by the adjacent artery diameter to compute Ain/A-, Aout/A- and AWT/A-ratios. AA-ratios were predicted using mixed-effects models including disease status, lung volume, gender, height and age as covariates. Results: Demographics did not differ significantly between cohorts. Mean AA-pairs: CF 299 inspiratory, 82 expiratory; controls 131 inspiratory, 58 expiratory. All ratios were significantly larger in inspiratory than in expiratory CTs for both groups (p<0.001). Aout/A- and AWT/A-ratios were larger in CF than in controls, independent of lung volume (p<0.01). The difference in Aout/A- and AWT/A-ratios between patients with CF and controls increased significantly with every following airway generation (p<0.001). Conclusion: Diagnosis of bronchiectasis is highly dependent on lung volume and is more reliably made using the outer airway diameter. The difference in bronchiectasis and AWT severity between the two cohorts increased with each airway generation. Key points: • More peripheral airways are visible in CF patients compared to controls. • Structural lung changes in CF patients are greater with each airway generation. • The number of airways visualized on CT could quantify CF lung disease. • For objective airway disease quantification on CT, lung volume standardization is required.
    Subject(s): Adolescent ; Airway dimensions ; Analysis ; Bronchi - diagnostic imaging ; Bronchiectasis ; Bronchiectasis - diagnostic imaging ; Bronchiectasis - etiology ; Chest ; Child ; CT ; Cystic fibrosis ; Cystic Fibrosis - complications ; Cystic Fibrosis - diagnostic imaging ; Diagnosis ; Diagnostic Radiology ; Exhalation ; Female ; Humans ; Imaging ; Imaging / Radiology ; Imaging/CT ; Internal Medicine ; Interventional Radiology ; Lung - diagnostic imaging ; Male ; Medicine ; Medicine & Public Health ; Neuroradiology ; Observer Variation ; Paediatric lung disease ; Pulmonary Artery - diagnostic imaging ; Radiographic Image Interpretation, Computer-Assisted - methods ; Radiology ; Resveratrol ; Retrospective Studies ; Spirometry - methods ; Tomography, X-Ray Computed - methods ; Ultrasound
    ISSN: 0938-7994
    E-ISSN: 1432-1084
    Source: Alma/SFX Local Collection
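The airway-artery ratios defined in this abstract are straightforward arithmetic on paired diameter measurements. A small worked example with made-up numbers is shown below.

```python
def aa_ratios(inner_d, outer_d, artery_d):
    """Airway-artery ratios: inner, outer and wall-thickness diameters
    divided by the adjacent artery diameter (all in the same units)."""
    wall_thickness = outer_d - inner_d
    return {
        "Ain/A": inner_d / artery_d,
        "Aout/A": outer_d / artery_d,
        "AWT/A": wall_thickness / artery_d,
    }

# Hypothetical airway-artery pair (mm): inner 2.2, outer 3.0, artery 2.5.
print(aa_ratios(2.2, 3.0, 2.5))   # ≈ {'Ain/A': 0.88, 'Aout/A': 1.2, 'AWT/A': 0.32}
```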
  • 8
    Conference Proceeding
    2014
    Language: English
    In: Lecture notes in computer science, 2014-01-01, p.47-58
    Description: Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convolutional classification RBM, a combination of the existing convolutional RBM and classification RBM, and use it for discriminative feature learning. We evaluate the classification accuracy of convolutional and non-convolutional classification RBMs on two lung CT problems. We find that RBM-learned features outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy.
    Subject(s): Classification Accuracy ; classification restricted Boltzmann machines ; Convolutional Neural Network ; Feature Learning ; Filter Bank ; Hide Node ; lung tissue qualification ; representation learning ; tissue qualification
    ISBN: 9783319139715
    ISBN: 3319139711
    ISSN: 0302-9743
    E-ISSN: 1611-3349
    Source: Springer Lecture Notes in Computer Science eBooks
    Source: Alma/SFX Local Collection
  • 9
    Language: English
    In: NeuroImage (Orlando, Fla.), 2019-01-15, Vol.185, p.534-544
    Description: Enlarged perivascular spaces (PVS) are structural brain changes visible in MRI, are common in aging, and are considered a reflection of cerebral small vessel disease. As such, assessing the burden of PVS has promise as a brain imaging marker. Visual and manual scoring of PVS is a tedious and observer-dependent task. Automated methods would advance research into the etiology of PVS, could help to assess what a “normal” burden is in aging, and could evaluate the potential of PVS as a biomarker of cerebral small vessel disease. In this work, we propose and evaluate an automated method to quantify PVS in the midbrain, hippocampi, basal ganglia and centrum semiovale. We also compare the associations between (earlier established) determinants of PVS and visual PVS scores versus the automated PVS scores, to verify whether automated PVS scores could replace visual scoring of PVS in epidemiological and clinical studies. Our approach is a deep learning algorithm based on convolutional neural network regression, and is contingent on successful brain structure segmentation; in our work we used FreeSurfer segmentations. We trained and validated our method on T2-contrast MR images acquired from 2115 subjects participating in a population-based study. These scans were visually scored by an expert rater, who counted the number of PVS in each brain region. Agreement between visual and automated scores was excellent for all four regions, with intraclass correlation coefficients (ICCs) between 0.75 and 0.88. These values were higher than the inter-observer agreement of visual scoring (ICCs between 0.62 and 0.80). Scan-rescan reproducibility was high (ICCs between 0.82 and 0.93). The associations between 20 determinants of PVS, including aging, and the automated scores were similar to those between the same determinants and the visual scores. We conclude that this method may replace visual scoring and facilitate large epidemiological and clinical studies of PVS.
    Subject(s): Computer science ; Data mining ; Deep learning ; Dementia ; Enlarged perivascular spaces ; Epidemiology ; Life Sciences & Biomedicine ; Machine learning ; Medical informatics ; Medical research ; Medicine, Experimental ; Neural networks ; Neuroimaging ; Neurosciences ; Neurosciences & Neurology ; Perivascular spaces ; Radiology, Nuclear Medicine & Medical Imaging ; Science & Technology ; Virchow-Robin spaces
    ISSN: 1053-8119
    E-ISSN: 1095-9572
    Source: Web of Science - Science Citation Index Expanded - 2019
    Source: DOAJ Directory of Open Access Journals - Not for CDI Discovery
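The automated scoring described above is a regression task: a convolutional network maps a brain-region image to a predicted PVS count. The toy PyTorch model below only illustrates that idea with arbitrary layer sizes and a 2-D input; it is not the architecture used in the paper.

```python
import torch
from torch import nn

class PVSCountRegressor(nn.Module):
    """Toy CNN that regresses a single non-negative count from a 2-D patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.relu(self.head(h))        # counts are non-negative

# Hypothetical training step against expert visual counts:
model = PVSCountRegressor()
patch = torch.randn(8, 1, 64, 64)              # batch of region crops
visual_counts = torch.randint(0, 20, (8, 1)).float()
loss = nn.MSELoss()(model(patch), visual_counts)
loss.backward()
```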
  • 10
    Article
    2015
    Language: English
    In: Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015, 2015-11-18, p.539-546
    Description: We address the problem of instance label stability in multiple instance learning (MIL) classifiers. These classifiers are trained only on globally annotated images (bags), but often can provide fine-grained annotations for image pixels or patches (instances). This is interesting for computer aided diagnosis (CAD) and other medical image analysis tasks for which only a coarse labeling is provided. Unfortunately, the instance labels may be unstable. This means that a slight change in training data could potentially lead to abnormalities being detected in different parts of the image, which is undesirable from a CAD point of view. Despite MIL gaining popularity in the CAD literature, this issue has not yet been addressed. We investigate the stability of instance labels provided by several MIL classifiers on 5 different datasets, of which 3 are medical image datasets (breast histopathology, diabetic retinopathy and computed tomography lung images). We propose an unsupervised measure to evaluate instance stability, and demonstrate that a performance-stability trade-off can be made when comparing MIL classifiers.
    Subject(s): Compute Tomographic Colonography ; Diabetic Retinopathy ; Multiple Instance Learning ; Pareto Frontier ; Positive Instance
    ISBN: 331924552X
    ISBN: 9783319245522
    ISSN: 0302-9743
    E-ISSN: 1611-3349
    Source: Springer Lecture Notes in Computer Science eBooks
    Source: Alma/SFX Local Collection
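Instance-label stability, as studied in this abstract, can be probed by training the same MIL classifier on perturbed versions of the training bags and measuring how often the predicted instance labels agree. The sketch below shows one simple agreement measure (plain label overlap across runs); it is not the paper's proposed stability measure.

```python
import numpy as np

def instance_label_agreement(label_sets):
    """Fraction of instances that receive the same label across all runs.

    label_sets: array of shape (n_runs, n_instances) with predicted
    instance labels from classifiers trained on resampled bag sets.
    """
    label_sets = np.asarray(label_sets)
    same = np.all(label_sets == label_sets[0], axis=0)
    return same.mean()

# Hypothetical instance labels from three training resamples:
runs = [
    [1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 1],
]
print(instance_label_agreement(runs))   # 4 of 6 instances are stable -> 0.666...
```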