  • 1
    Language: English
    In: Computational and mathematical methods in medicine, 2015-03-01, Vol.2015, p.450341-23
    Description: Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain’s anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation.
    Subject(s): Reproducibility of Results ; Algorithms ; Humans ; Brain - pathology ; Magnetic Resonance Imaging - methods ; Software ; Brain Mapping - methods ; Image Processing, Computer-Assisted - methods ; Imaging, Three-Dimensional ; Normal Distribution ; Cluster Analysis ; Brain ; Usage ; Magnetic resonance imaging ; Medical imaging equipment ; Index Medicus ; Review
    ISSN: 1748-670X
    E-ISSN: 1748-6718
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
  • 2
    Language: English
    In: Sensors (Basel, Switzerland), 2019-01-18, Vol.19 (2), p.391
    Description: In this paper, we present a novel 2D–3D pedestrian tracker designed for applications in autonomous vehicles. The system operates on a tracking by detection principle and can track multiple pedestrians in complex urban traffic situations. By using a behavioral motion model and a non-parametric distribution as state model, we are able to accurately track unpredictable pedestrian motion in the presence of heavy occlusion. Tracking is performed independently, on the image and ground plane, in global, motion compensated coordinates. We employ Camera and LiDAR data fusion to solve the association problem where the optimal solution is found by matching 2D and 3D detections to tracks using a joint log-likelihood observation model. Each 2D–3D particle filter then updates its state from associated observations and a behavioral motion model. Each particle moves independently following the pedestrian motion parameters which we learned offline from an annotated training dataset. Temporal stability of the state variables is achieved by modeling each track as a Markov Decision Process with probabilistic state transition properties. A novel track management system then handles high level actions such as track creation, deletion and interaction. Using a probabilistic track score the track manager can cull false and ambiguous detections while updating tracks with detections from actual pedestrians. Our system is implemented on a GPU and exploits the massively parallelizable nature of particle filters. Due to the Markovian nature of our track representation, the system achieves real-time performance operating with a minimal memory footprint. Exhaustive and independent evaluation of our tracker was performed by the KITTI benchmark server, where it was tested against a wide variety of unknown pedestrian tracking situations. On this realistic benchmark, we outperform all published pedestrian trackers in a multitude of tracking metrics.
    Subject(s): multi-object tracking ; behavioral ; LiDAR ; pedestrian tracking ; sensor fusion ; autonomous vehicle ; driverless car ; particle filter
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection
  • 3
    Language: English
    In: IEEE transactions on geoscience and remote sensing, 2016-03, Vol.54 (3), p.1738-1756
    Description: Extended attribute profiles (EAPs) have been widely used for the classification of high-resolution hyperspectral images. EAPs are obtained by computing a sequence of attribute operators. Attribute filters (AFs) are connected operators, so they can modify an image only by merging its flat zones. These filters are effective when dealing with very high resolution images since they preserve the geometrical characteristics of the regions that are not removed from the image. However, AFs, being connected filters, suffer from the problem of "leakage" (i.e., regions related to different structures in the image that happen to be connected by spurious links are treated as a single object). Objects expected to disappear at a certain threshold remain present when they are connected with other objects in the image, and the attributes of small objects are mixed with those of the larger objects they are connected to. In this paper, we propose a novel framework for morphological AFs with partial reconstruction and extend it to the classification of high-resolution hyperspectral images. The ultimate goal of the proposed framework is to extract spatial features that better model the attributes of different objects in the remotely sensed imagery, enabling better classification performance. An important characteristic of the presented approach is that it is very robust to the ranges of rescaled principal components, as well as to the selection of attribute values. Our experimental results, conducted using a variety of hyperspectral images, indicate that the proposed framework for AFs with partial reconstruction provides state-of-the-art classification results. Compared with methods using only a single EAP, or stacking together all EAPs computed by existing attribute openings and closings, the proposed framework achieves significant improvements in overall classification accuracy.
    Subject(s): partial reconstruction ; high spatial resolution ; hyperspectral data ; Shape ; Morphology ; Attribute profiles (APs) ; Feature extraction ; classification ; Image reconstruction ; Hyperspectral imaging ; Research ; Image processing ; Remote sensing ; Geographic information systems ; Engineering Sciences ; Signal and Image processing
    ISSN: 0196-2892
    E-ISSN: 1558-0644
    Source: IEEE Electronic Library (IEL)
  • 4
    Language: English
    In: Sensors (Basel, Switzerland), 2019-08-28, Vol.19 (17), p.3727
    Description: Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea to rely on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance; and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on the pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.
    Subject(s): Neural Networks, Computer ; Pedestrians ; Algorithms ; Image Processing, Computer-Assisted ; Vision, Ocular ; Humans ; Lighting ; Automobile Driving ; Index Medicus ; fusion ; deep learning ; infrared ; visible ; ADAS ; pedestrian detection
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection
  • 5
    Language: English
    In: Sensors (Basel, Switzerland), 2020-04-29, Vol.20 (9), p.2513
    Description: With the rapid development of sensing technology, data mining, and machine learning for human health monitoring, it became possible to monitor personal motion and vital signs in a manner that minimizes the disruption of an individual's daily routine and assists individuals who have difficulty living independently at home. A primary difficulty that researchers confront is acquiring an adequate amount of labeled data for model training and validation purposes. Activity discovery therefore addresses the absence of activity labels using approaches based on sequence mining and clustering. In this paper, we introduce an unsupervised method for discovering activities from a network of motion detectors in a smart home setting. First, we present an intra-day clustering algorithm to find frequent sequential patterns within a day. As a second step, we present an inter-day clustering algorithm to find the common frequent patterns between days. Furthermore, we refine the patterns to obtain more compressed and better-defined cluster characterizations. Finally, we track the occurrences of various regular routines to monitor the functional health in an individual's patterns and lifestyle. We evaluate our methods on two public data sets captured in real-life settings from two apartments during seven-month and three-month periods.
    Subject(s): Index Medicus ; clustering ; health monitoring ; smart homes ; human activity discovery ; sequence mining ; unsupervised learning
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection
  • 6
    Language: English
    In: Sensors (Basel, Switzerland), 2018-12-21, Vol.19 (1), p.23
    Description: In this paper, we present a complete loop detection and correction system developed for data originating from lidar scanners. Regarding detection, we propose a combination of a global point cloud matcher with a novel registration algorithm to determine loop candidates in a highly effective way. The registration method can deal with point clouds that deviate largely in orientation while improving the efficiency over existing techniques. In addition, we accelerated the computation of the global point cloud matcher by a factor of 2–4, exploiting the GPU to its maximum. Experiments demonstrated that our combined approach detects loops in lidar data more reliably than other point cloud matchers, as it leads to better precision–recall trade-offs: for nearly 100% recall, we gain up to 7% in precision. Finally, we present a novel loop correction algorithm that improves the average and median pose error by a factor of 2, while requiring only a handful of seconds to complete.
    Subject(s): point clouds ; loop detection ; lidar
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection
  • 7
    Language: English
    In: Sensors (Basel, Switzerland), 2020-08-26, Vol.20 (17), p.4817
    Description: This paper presents a vulnerable road user (VRU) tracking algorithm capable of handling noisy and missing detections from heterogeneous sensors. We propose a cooperative fusion algorithm for matching and reinforcing of radar and camera detections using their proximity and positional uncertainty. The belief in the existence and position of objects is then maximized by temporal integration of fused detections by a multi-object tracker. By switching between observation models, the tracker adapts to the detection noise characteristics making it robust to individual sensor failures. The main novelty of this paper is an improved imputation sampling function for updating the state when detections are missing. The proposed function uses a likelihood without association that is conditioned on the sensor information instead of the sensor model. The benefits of the proposed solution are two-fold: firstly, particle updates become computationally tractable and secondly, the problem of imputing samples from a state which is predicted without an associated detection is bypassed. Experimental evaluation shows a significant improvement in both detection and tracking performance over multiple control algorithms. In low light situations, the cooperative fusion outperforms intermediate fusion by as much as 30%, while increases in tracking performance are most significant in complex traffic scenes.
    Subject(s): Index Medicus ; multi-object tracking ; switching observation model ; cooperative sensor fusion ; particle filter ; people tracking ; multiple imputations
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection
  • 8
    Language: English
    In: IEEE journal of selected topics in applied earth observations and remote sensing, 2014-06, Vol.7 (6), p.2405-2418
    Description: The 2013 Data Fusion Contest organized by the Data Fusion Technical Committee (DFTC) of the IEEE Geoscience and Remote Sensing Society aimed at investigating the synergistic use of hyperspectral and Light Detection And Ranging (LiDAR) data. The data sets distributed to the participants during the Contest, a hyperspectral imagery and the corresponding LiDAR-derived digital surface model (DSM), were acquired by the NSF-funded Center for Airborne Laser Mapping over the University of Houston campus and its neighboring area in the summer of 2012. This paper highlights the two awarded research contributions, which investigated different approaches for the fusion of hyperspectral and LiDAR data, including a combined unsupervised and supervised classification scheme, and a graph-based method for the fusion of spectral, spatial, and elevation information.
    Subject(s): VHR imagery ; Laser radar ; Light Detection And Ranging (LiDAR) ; urban ; Data integration ; Vegetation mapping ; hyperspectral ; Feature extraction ; Data fusion ; multi-modal ; Hyperspectral imaging
    ISSN: 1939-1404
    E-ISSN: 2151-1535
    Source: IEEE Electronic Library (IEL)
    Source: Directory of Open Access Journals
  • 9
    Language: English
    In: IEEE geoscience and remote sensing letters, 2016-05, Vol.13 (5), p.686-690
    Description: Dimensionality reduction (DR) is an important and helpful preprocessing step for hyperspectral image (HSI) classification. Recently, sparse graph embedding (SGE) has been widely used in the DR of HSIs. SGE explores the sparsity of the HSI data and can achieve good results. However, in most cases, locality is more important than sparsity when learning the features of the data. In this letter, we propose an extended SGE method: the weighted sparse graph based DR (WSGDR) method for HSIs. WSGDR explicitly encourages the sparse coding to be local and pays more attention to those training pixels that are more similar to the test pixel in representing the test pixel. Furthermore, WSGDR can offer data-adaptive neighborhoods, which results in the proposed method being more robust to noise. The proposed method was tested on two widely used HSI data sets, and the results suggest that WSGDR obtains sparser representation results. Furthermore, the experimental results also confirm the superiority of the proposed WSGDR method over the other state-of-the-art DR methods.
    Subject(s): hyperspectral image (HSI) ; Collaboration ; Encoding ; Robustness ; sparse graph embedding (SGE) ; nearest neighbor graph ; weighted sparse coding ; Hyperspectral imaging ; Dimensionality reduction (DR)
    ISSN: 1545-598X
    E-ISSN: 1558-0571
    Source: IEEE Electronic Library (IEL)
  • 10
    Language: English
    In: Sensors (Basel, Switzerland), 2016-11-16, Vol.16 (11), p.1923
    Description: In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutive generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m². To prove the genericity of the system, it was tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.
    Subject(s): Iterative Closest Point (ICP) ; 3D point cloud registration ; Ladybug ; surface reconstruction ; LiDAR scanning ; loop closure ; Velodyne
    ISSN: 1424-8220
    E-ISSN: 1424-8220
    Source: Academic Search Ultimate
    Source: PubMed Central
    Source: Directory of Open Access Journals
    Source: Alma/SFX Local Collection