Model-based sensor fusion for aviation

Misha Pavel, Ravi K. Sharma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We describe a sensor fusion algorithm based on a set of simple assumptions about the relationship among the sensors. Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated by a weighted sum of the common component of each sensor output at each pixel. We then examine a variety of techniques for mapping the sensor signals onto perceptual dimensions, so that the human operator can benefit from the enhanced fused image and, at the same time, identify the source of the information. In particular, we examine several color mapping schemes.
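
As a rough illustration of the approach outlined in the abstract (not the authors' exact formulation), the sketch below fuses two co-registered grayscale images with a per-pixel weighted sum of locally estimated common components, then applies a simple false-color mapping so a viewer can still tell which sensor contributed more at each location. The local-mean estimate of the common signal, the variance-based weights, the HSV mapping, and the function names `fuse_weighted_sum` and `color_map` are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from matplotlib.colors import hsv_to_rgb


def fuse_weighted_sum(img_a, img_b, win=7, eps=1e-6):
    """Fuse two co-registered grayscale images (float arrays scaled to [0, 1])."""
    # Crude estimate of the signal common to each pixel's neighborhood: local means.
    common_a = uniform_filter(img_a, size=win)
    common_b = uniform_filter(img_b, size=win)
    # Per-pixel weights proportional to local signal energy (local variance).
    var_a = uniform_filter(img_a ** 2, size=win) - common_a ** 2
    var_b = uniform_filter(img_b ** 2, size=win) - common_b ** 2
    w_a = var_a / (var_a + var_b + eps)
    w_b = 1.0 - w_a
    # Weighted sum of the common components at each pixel.
    return w_a * common_a + w_b * common_b


def color_map(fused, img_a, img_b):
    """Map fused intensity and sensor difference onto separate perceptual dimensions."""
    diff = img_a - img_b                                      # which sensor dominates here
    hue = 0.5 * (diff - diff.min()) / (np.ptp(diff) + 1e-6)   # hue encodes the source
    sat = np.full_like(fused, 0.6)                            # fixed, moderate saturation
    val = np.clip(fused, 0.0, 1.0)                            # value encodes fused intensity
    return hsv_to_rgb(np.dstack([hue, sat, val]))             # H x W x 3 RGB image


# Hypothetical usage, assuming two registered images of equal size:
# rgb = color_map(fuse_weighted_sum(ir_img, tv_img), ir_img, tv_img)
```

Putting the fused luminance in the value channel and the inter-sensor difference in the hue channel is one simple way to keep the sources perceptually separable, which is the goal the abstract emphasizes.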

Original language: English (US)
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Pages: 169-176
Number of pages: 8
Volume: 3088
DOIs: https://doi.org/10.1117/12.277231
State: Published - 1997
Event: Enhanced and Synthetic Vision 1997 - Orlando, FL, United States
Duration: Apr 21 1997 → Apr 21 1997

Other

Other: Enhanced and Synthetic Vision 1997
Country: United States
City: Orlando, FL
Period: 4/21/97 → 4/21/97

Keywords

  • Color fusion
  • Color mapping
  • Color vision
  • Enhanced vision
  • Image fusion
  • Multisensor fusion
  • Sensor fusion

ASJC Scopus subject areas

  • Applied Mathematics
  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics

Cite this

Pavel, M., & Sharma, R. K. (1997). Model-based sensor fusion for aviation. In Proceedings of SPIE - The International Society for Optical Engineering (Vol. 3088, pp. 169-176). https://doi.org/10.1117/12.277231
