Image Processing
Dec97-09 Senior Design Project



Final Project Report



Image Processing

December 4, 1997

 

Team Members:

  • Chia Jit Lim (jitleia@iastate.edu)
  • Chung-Jen Hii (cjhii@iastate.edu)
  • Umar Affan (guess119@iastate.edu)

    Problem Statement

    The objective of this project is to develop image-processing capability for optical images obtained with cameras using both regular and infrared film and/or a digital camera. Methods for obtaining quantitative information from the images have to be developed. The results will then be used routinely by researchers observing the growth of crops and animal habitats in a given area.

     

    Design Description

    A balloon carrying a payload is launched and rises to an altitude of about eighty thousand feet above the Earth's surface. The payload contains cameras, a microcontroller, a GPS receiver, an antenna, and other components. The cameras are designated to take photographs of the ground at different altitudes and times. To capture additional image information, one camera is loaded with infrared film and the other with visual film. The two cameras must be aligned, pointed in the same direction, and triggered at the same time, so that each photograph taken on infrared film covers exactly the same scene as the one taken on visual film. Comparisons can then be made between the two kinds of photographs for image color calibration.

    At low altitude, while the balloon is rising quickly, images are captured at shorter time intervals than at higher altitude, because at high altitude successive aerial images do not differ much. Three different approaches are used in this project to determine the triggering times. The first, and least realistic, determines the triggering time as a function of height based on assumed ascent behavior. The second uses the actual altitude derived from the GPS data received. The third lets the operator control the triggering with a remote-control signal sent from the ground.

    To point both cameras in the same direction, we came up with a conceptual design in which the light entering a single objective lens is reflected to two ocular lenses where the cameras are positioned. We also ensure that both cameras are fastened securely. With the two cameras in good mechanical alignment, we can compare the images produced on visual film with those produced on infrared film pixel by pixel. The spectral information from the visual and infrared images can then be combined, and the color calibration improved.

    For next semester, we plan to experiment with spectral plotting by applying spatial-spectrum analysis and statistical analysis. Optimal image filters will also be developed using Matlab. In addition, image enhancement and color calibration for color stability against changes in reflection will be accomplished. Finally, we will apply a new algorithm for image enhancement and compare it with the conventional one.

     

    Technical Solution

    Overview

    First of all, Figure 1 below describes how the area covered by the camera at a given altitude depends on time. Here, we assume the camera has a field of view of 40 degrees and the balloon ascends at a constant speed of 1000 ft/min. Equation 1 below gives the relation between the height of the balloon and the width covered by the camera's field of view.

    w = 2 * h * tan(20°)          (1)

     

     

    Figure 1 Width Covered By the View of the Camera

     

    We compute this equation in MATLAB and calculate the width covered as the balloon rises. These values are shown in Table 1 in Appendix A, and width versus height is plotted in Graph 1 in Appendix A.
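
    A minimal MATLAB sketch of this calculation (variable names are ours; the 40-degree field of view and the 1000 ft/min ascent rate are the assumptions stated above):

        % Width of ground covered (Equation 1), assuming a 40-degree field of
        % view and a constant ascent rate of 1000 ft/min.
        h = 0:500:80000;                 % altitude in feet
        w = 2 .* h .* tan(20*pi/180);    % width covered on the ground, in feet
        t = h ./ 1000;                   % minutes after launch at 1000 ft/min
        plot(h, w), xlabel('Altitude (ft)'), ylabel('Width covered (ft)')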

    Next, we need to determine the time interval to take photographs. A few assumptions have been made to reach the solution:

    1. The balloon rises at a constant speed of 1000 ft/min.
    2. Thirty-six exposure film is used.
    3. Pictures are taken in the range from 300 feet to 10,000 feet, with more pictures taken at lower altitudes.

    With these assumptions, we use MATLAB to calculate the time at which each picture is taken. We start at 300 feet and each time add an increment of 15*n (with n = 1, 2, ..., 35, 36) to the previous altitude, so the last picture is taken at around 10,000 feet. The results are shown in Table 2 and plotted in Graph 2 in the appendix. These values are then compared with the Table 1 values to obtain the exact time at which each picture is taken. The results are shown in Table 3 and graphed in Graph 3 in the appendix.
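
    A short MATLAB sketch of this schedule (a sketch only: the text leaves it ambiguous whether the last increment uses n = 35 or 36, so we stop at 35 to produce 36 exposures):

        % Trigger altitudes: start at 300 ft and add an increment of 15*n feet
        % before each subsequent exposure, then convert altitude to elapsed
        % time assuming the 1000 ft/min ascent rate.
        alt    = zeros(1, 36);
        alt(1) = 300;
        for n = 1:35
            alt(n+1) = alt(n) + 15*n;
        end
        t_min = alt / 1000;              % trigger times in minutes after launch
        t_sec = round(60 * t_min);       % the same schedule in seconds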

    This approach will be tried in our first mission (scheduled for March 22, 1997). There are two alternative solutions. The first is to program the triggering control using GPS altitude data; this is more accurate than our first approach because actual GPS data replace results based on assumptions, but the main problem is designing a circuit to receive and process the data from the GPS satellites. The other alternative is to trigger the cameras from the ground with a remote control, which lets us take pictures at desired altitudes based on the GPS data received.

    In a later mission, when a digital camera is installed in the payload, we can control the triggering of the visual camera from the ground using the data sent back from the digital camera (the transmission of the image data and the control channel for the trigger are being developed by another HABET project team).

    The second task is the mechanical alignment, which ensures that the regular camera and the infrared camera snap pictures of the same spot simultaneously.

    Camera alignment is very important for data analysis. We can get a lot of information from a single (visual) camera, but we also need information that cannot be obtained through the visual camera alone, so another camera (digital or infrared) is needed to provide additional information while the visual camera is retained. The use of the infrared images is discussed in the color calibration section. This is where camera alignment comes in: with the two cameras (visual and infrared) aligned, we can extract information from both the infrared and visual images.

    Before that, we have a few assumptions to make:

    1. The area covered by both cameras through their lenses is exactly the same.
    2. Both cameras have the same exposure time.
    3. Both cameras have to be stationary (a loose camera can deviate a few degrees from the targeted area).

    We have to make sure both cameras take pictures of the same area simultaneously. This task, however, was not completed this semester, because only the visual camera was used in our first mission. Next semester we will use both the visual and infrared cameras, and the mechanical alignment will be accomplished.

    The solution is shown in Figure 2 below. An external lens assembly is designed to connect the two cameras' lenses. This ensures that both cameras take the same picture all the time. The merit of this design is that we do not have to align both cameras to cover exactly the same area (which is the hardest part to achieve); we only have to keep the cameras parallel, because the single external lens covers the same area at all times. The reflecting mirror (or prism) has to be set at 45 degrees so that the same picture reaches the lens of the second camera.

     

     

     

    Figure 2 External Lens Connecting Both Cameras

     

    We abandoned this alternative because the design takes up a lot of space (due to its height), we have only limited space to fit two cameras and the triggering microcontroller, and time was also limited.

    We therefore used an easier approach: the payload itself is designed to provide the camera alignment. The two cameras are aligned parallel to each other and laid flat on the glass window. With this, the two sides of the images are aligned; the only problem is that the top/bottom edges are off by a little, but we can correct that when we process the images.

    We did not aim the two cameras so that they image exactly the same area, because that would require focusing both cameras on the same point, and the altitude varies during the mission, which changes the focus point continuously. It is therefore to our advantage to have both cameras aligned flat on the glass.

    Image calibration is the third phase, and the last one of our Spring semester project. Color calibration refers to the process of changing the existing color into the real color based on a reference color. There are several reasons for color calibration. The first is the high altitude at which the images are taken: the color of an image taken near the ground differs from that of one taken at high altitude, for example 8,000 feet, because light intensity and reflection rates vary with altitude. The image colors also differ under different weather conditions; sunny, cloudy, rainy, or even foggy days give different color intensities. In addition, colors change with aging; the colors of the images fade if they are kept for a long period of time.

    Therefore, color calibration is a very important phase of our project for obtaining good-quality aerial images. After having both the visual and infrared photos from a particular mission developed, we scan them into the computer and digitize them using Matlab or other image-processing software, such as HAPPY, developed at ISU. After digitization, the images can be processed.

     

     

     

     

    Figure 3 Decomposition of Original Image

     

    Each image consists of pixels. The original image can be decomposed into Red (R), Green (G), and Blue (B) components (see Figure 3). In Matlab, for instance, the R component is an array of 222 x 360 pixels. The position of each pixel is given by its coordinates (x, y), and each pixel carries 8 bits of information, so its value ranges from 0 to 255, where 0 represents pure black and 255 represents pure white. With this information we can use a Matlab program to analyze the spectrum at each pixel and plot the intensity of the color as a function of wavelength (Figure 4).
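
    As a rough illustration in MATLAB (the file name is hypothetical; the image is assumed to have been scanned and saved already):

        % Decompose a scanned photograph into its R, G, B planes and read one pixel.
        img = imread('mission_photo.jpg');       % hypothetical file name
        R = img(:,:,1);  G = img(:,:,2);  B = img(:,:,3);
        % Each plane holds 8-bit values: 0 is pure black, 255 is pure white.
        x = 100;  y = 50;                        % arbitrary pixel coordinates
        fprintf('Pixel (%d,%d): R=%d G=%d B=%d\n', x, y, R(y,x), G(y,x), B(y,x));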


    Figure 4 Graph of Intensity versus Wavelength

     

    With good mechanical alignment of the cameras, we can compare the images obtained on visual film with those on infrared film. However, we need a set of source images, preferably taken at lower altitude, which are clearer and contain more information, to serve as our color reference. We can then map images taken in other missions (referred to as test images) to our reference images. A test image taken from high altitude may be unclear, so we compare it with the source images, which are taken at lower altitude and have higher resolution, to obtain as much information as possible from the blurry test image. For instance, a test image may cover a wide area that shares only a small common region at its center with the source image (see Figure 5); that small common region can be enlarged to the size of the source image. Only with this alignment of images can we compare the two images (test and source) pixel by pixel.

     

     

     

     

    Figure 5 Image Enlargement and Alignment

     

    Proper alignment also allows quality calibration to be carried out. The main task in this process, however, is to obtain a good calibration scheme. With a good color reference, we can change the existing colors to the real colors based on the reference. For instance, if the reference image has R, G, B components with value "r" at point P(x,y), and the test image has R, G, B values "t" at the same point, then we can adjust "t" toward "r" to obtain the real color at that point (Figure 6). Doing this for the other points gives a new set of R, G, B components, which we combine to get a better image.


    Figure 6 Color Calibration

     

     

    Preparation for HABET Mission

     

    Every time there is a HABET mission launch, considerable resources are needed to ensure a successful and smooth mission. That applies to our image-processing team as well: we have to make sure that our visual payload is ready to go and can take good images when we fly it, not to mention any special request from another HABET team that requires our cooperation and a modification of the visual payload for specific purposes.

     

    1) Payload Design

    We are the first team to work on image processing for HABET. Our first visual payload was built over spring break 1997 to prepare for the HABET mission scheduled for March 22, 1997. The visual payload measures 10 inches wide by 10 inches long by 8 inches high. It is designed to accommodate more than two cameras and has windows for a bottom view, a top view, and a side view. The bottom window is for the aerial photographs; the top one captures the change in the balloon's size; and the side window captures the beautiful view of the curvature of the Earth. The interior of the payload is designed so that the cameras remain stationary throughout the flight, and space is provided for the triggering controller circuit. The external and internal design of the payload is depicted in Appendix B along with its measurements.

    Due to the loss of the visual payload during a mission in summer 1997, we had to rebuild the payload in September. From past experience with the visual payload, we decided to design one that accommodates two side-by-side visual cameras facing downward to take the aerial ground images. With this design, only slight modification is needed for future missions should we decide to incorporate an infrared camera and a digital camera. The dimensions of the payload are basically similar to the first one; however, the bottom window is made wider to accommodate two cameras (visual, infrared, or digital). The design of the payload is shown in Appendix C.

     

    2) Timing Schedule

    A timing schedule is needed to determine when to trigger the cameras. The best altitude range for good-quality images is roughly 3,000 to 30,000 feet, although it depends on weather conditions. Initially, each of us worked on our own timing schedule, and our team supporter, Jooho Lee, helped determine which schedule best suited the coming mission. This schedule is very important, since we want a good set of quality pictures to use as the reference for future missions. After careful consideration, we decided to use a linearly ascending range for the timing schedule, since it is easier to program the microcontroller with linear data. A timing schedule used for one of our missions is provided in Appendix D.

     

    3) Controller Circuit

    A controller is needed to trigger the cameras in the payload according to the timing schedule. A simple controller circuit was built using the PIC16C84, an 8-bit CMOS EEPROM microcontroller. The complete circuit diagram is given in Appendix E.

    To reduce the probability of circuit failure due to bad wire connections, and to make the circuit cleaner, the circuit was made on a printed circuit board (PCB). The instruction sheet for making a PCB, from Injectorall Electronics Corp., is in Appendix F.

     

    4) Software Programming

    Jooho Lee taught us to program the chip, which can be done in a short amount of time. One advantage of using the PIC16C84 is that we can program the chip right before the launch; sometimes this is required, as the weather is not always predictable. The complete pin assignment for the PIC16C84 is shown in Appendix G, and the program is in Appendix H.

     

     

    5) HABET/Payload Arrangement

    For this, we considered the distance needed for the upper camera to capture the full size of the balloon at maximum altitude. With a maximum diameter of 40 feet (at the lowest pressure), the balloon has to be at least 55 feet away from the camera. This is not a problem, since the distance from the balloon to the main payload is over 73 feet. The next concern was the minimum distance between our payload and the main payload: we have to make sure that the main payload does not block the camera's view. The main payload's cross-section is 1.1' by 1.0', so we calculated that the minimum distance has to be 3.2 feet. But we cannot place our payload too far from the main payload, because we want good resolution in the photos. With some calculation and experiment, we set the distance between the two payloads at 10 feet. Below is a sketch of the entire balloon system for HABET Alpha-2:


    Figure 7 HABET ALPHA-2 Payloads Arrangement

     

     

    Image Analysis

     

    1) Magnifying an Object (Plane's Wing)

    The HABET Alpha-3 mission was completed successfully and captured some good pictures from the visual payload. One of these good pictures is image no. 12, on which I am going to perform an object magnification. Image no. 12 is shown on the following page. In this image there is a very tiny red plane located at the left corner of the picture. My primary task is to magnify the plane's wing and make the code on the wing as clear as possible.

    First of all, I zoom in on the plane's wing as far as I can so that I can roughly view the code on the wing. After zooming in, the code is still really unclear. To detect the code more clearly, I first try focusing and sharpening the image using an image application called 'O-Photo'. I adjust the brightness and contrast as well so that I can see the code sharply, and I also experiment with the smoothing and balancing functions to get the best results. The results are shown on the following page.

    Besides focusing and sharpening, I also try edge detection on the code using Matlab; some edge-detection results are shown on the following page. These results are not good enough: the edge detection picks up the edge of the wing very well, but not the code; it detects only part of the code and gives insufficient data to read it. Next, I also try highpass filtering, because I believe it can give some edge information too. Since the code contrasts strongly with its surroundings, I use a highpass filter to remove the low frequencies and keep only the high frequencies. The results are quite good for particular images and are also given on the following page.

    Finally, I conclude that of all the methods, focusing and sharpening give the best results. Although the final results do not give a definite answer for the code, I believe they are good enough to be read and are far better than the original. Some possibilities for the code are 'NC 398 3H', 'NC 389 3H', or 'NC 399 3H'.

    The numbers 3, 8, and 9 look quite similar and are written very close to each other, so it is rather difficult to distinguish among them.

     

    2) Rotational Analysis

    The HABET Alpha-3 mission was launched successfully on May 18, 1997. Two cameras were installed on the visual payload: the upper camera takes images of the helium balloon as it ascends, and the lower camera takes the aerial ground images.

    From the GPS (Global Positioning System) data and the ground photographs, the rotation of the visual payload is examined. The ground photographs were taken every 6 seconds after launch. Assuming that the rotation is in the anticlockwise direction and that the rotational angle between two consecutive photographs is no greater than 2π, the rotational speed of the visual payload is calculated as follows (a short MATLAB sketch of the conversion is given after Table 1).

     

    Image #    Rotational Angle (deg)    Radians    Rotational Speed (rad/s)
       8                  0               0.0000            0.0000
       9               +137               2.3911            0.3985
      10               +147               2.5656            0.4276
      11               +112               1.9548            0.3258
      12                +81               1.4137            0.2356
      13               +237               4.1364            0.6894
      14               +211               3.6826            0.6138
      15               +273               4.7647            0.7941
      16               +265               4.6251            0.7709
      17               +215               3.7525            0.6254
      18               +141               2.4609            0.4102
      19               +106               1.8500            0.3083
      20               +288               5.0265            0.8378
      21               +110               1.9199            0.3200
      22               +169               2.9496            0.4916
      23               +299               5.2185            0.8698
      24               +248               4.3284            0.7214
      25               +267               4.6600            0.7767

     

    Average rotational angle = 194.4706 degrees
    Average rotational speed = 0.5657 rad/s

    Table 1 Rotational Speed of the Visual Payload
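
    The conversion in Table 1 can be reproduced with a few lines of MATLAB (the angles are the Table 1 values; the 6-second exposure interval is stated above):

        % Rotational speed from the angle turned between photographs 6 s apart.
        angles_deg = [0 137 147 112 81 237 211 273 265 215 141 106 288 110 169 299 248 267];
        angles_rad = angles_deg * pi/180;        % convert to radians
        speed      = angles_rad / 6;             % rad/s over the 6 s interval
        fprintf('Average angle = %.4f deg, average speed = %.4f rad/s\n', ...
                mean(angles_deg(2:end)), mean(speed(2:end)));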

     

    The three-dimensional graph of the flight path for the HABET Alpha-3 mission is enclosed on the following page. From the flight path, one should be able to imagine how the payload spins (anticlockwise) as it ascends.

     

    3) Resolution Analysis

    We obtained a valuable set of images from the recent HABET Alpha-3 mission and have analyzed them to find the pixel resolution. The table below contains the information for each image taken during the mission. The last column shows the area covered by each pixel, obtained after calculating the ground length (distance) covered by each image.

     

    Image #   Time, UTC (hours)   Time, UTC (hhmmss)   Altitude (m)   Length (m), 6"   Width (m), 4"   Area (m^2/pixel)
       8          16.8350              165006              6.6472           4.83            3.22            0.0000
       9          16.8367              165012             24.1472          17.48           11.72            0.0004
      10          16.8383              165018             45.5472          33.23           22.15            0.0014
      11          16.8400              165024             66.3472          48.30           32.20            0.0030
      12          16.8417              165030             88.6472          64.53           43.02            0.0054
      13          16.8433              165036            110.4472          80.40           53.60            0.0083
      14          16.8450              165042            134.6472          98.02           65.34            0.0123
      15          16.8467              165048            155.7472         113.37           75.58            0.0165
      16          16.8483              165054            180.6472         131.50           87.67            0.0222
      17          16.8500              165100            205.3472         149.48           99.65            0.0287
      18          16.8517              165106            229.6472         167.17          111.44            0.0359
      19          16.8533              165112            257.5472         187.48          124.98            0.0452
      20          16.8550              165118            287.6472         209.39          139.60            0.0564
      21          16.8567              165124            318.6472         231.96          154.64            0.0692
      22          16.8583              165130            342.6472         249.43          166.28            0.0800
      23          16.8600              165136            368.6472         268.35          178.90            0.0926
      24          16.8617              165142            395.1472         287.64          191.76            0.1064
      25          16.8633              165148            420.1472         305.84          203.89            0.1202

     

    We used Equation (1) and the geometry shown in Figure 1 in the earlier section to calculate the ground distance covered by each image. In this equation we assume the camera has a field of view of 40 degrees. The actual print size is 6" long and 4" wide, which corresponds to 882 pixels in length and 552 pixels in width.

    Next, we want to verify that the area covered by each pixel in the table is reasonable. In the images we find an object whose exact length we know and compare it with the computed length. In image #12 we used a white car as the reference object: we zoomed in on the car and counted the number of pixels representing its length, which is 44 pixels, or 3.2208 m using the data from the table. The actual car length is about 3.32 m, so the data we obtained are quite accurate.
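
    A small MATLAB sketch of the per-pixel calculation for image #12 (the 882 x 552 pixel scan size is taken from the text; small differences from the table entries are rounding):

        % Ground area covered per pixel, from Equation (1) and the scanned
        % print size of 6 in x 4 in (about 882 x 552 pixels).
        alt     = 88.6472;                     % altitude of image #12 in metres
        len_m   = 2 * alt * tan(20*pi/180);    % ground length spanned by the 6-inch side
        wid_m   = len_m * 4/6;                 % ground width spanned by the 4-inch side
        area_px = (len_m/882) * (wid_m/552);   % square metres per pixel (approx.)
        car_len = 44 * len_m/882;              % the 44-pixel white car, about 3.22 m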

     

     

    Color Calibration

    Our project focus for the second semester (spring semester) is color calibration. Since we do remote sensing by taking aerial photographs from a high-altitude balloon, we need a color-calibration algorithm that counters the color attenuation and reflection problems, so that natural images with true color information can be obtained.

     

    1) Background Information

    Image analysis of the photographs from high-altitude balloon experiments suffers from color attenuation and reflection problems. Attenuation is due to energy loss from scattering and absorption by objects, and to changes in topographic radiation. The latter arise from changes in the angle of the incident solar radiation transmitted to the Earth's surface and reflected by different objects. Different objects affect the amount of radiant energy that reaches the Earth's surface and is reflected back to the remote-sensing detector (e.g. the cameras), and this amount also varies with place, date, and time. As the balloon ascends, the reflection angle from an object and the attenuation rate of the sunlight change, because the conditions under which the pictures are taken change. Color calibration is required to get natural images and correct information from images taken at different times and altitudes.

     

    2) Different Approaches

    Several studies suggest solutions to the environmental attenuation problem. First, the amount of attenuation from soil, water, vegetation, and urban areas differs from one surface type to another; the relative brightness, hue, and saturation information in an area can remove the attenuation problem. Second, in some areas the difference in reflection or emission is so small that the attenuation problem makes them inseparable; comparing the image data with remote-sensing reference data collected for a particular place, date, and time can remove the attenuation problem.

    Third, since the Earth's surface selectively scatters or absorbs the solar radiation, the attenuation in the visible, near-infrared, and thermal-infrared channels of the spectrum differs from one channel to another. Two cameras, one for visual color photography and the other for black-and-white infrared photography, are aligned to point at the same object. The infrared image data are free of scattering problems, whereas the visual data are influenced by them. Shifting the histograms can normalize the channels to each other. Generally, the shorter the wavelengths of a channel, the brighter the image data: scattering increases the brightness at visible wavelengths, whereas absorption decreases the brightness in the infrared channel.

    Finally, through classification of the remote-sensing image data, the relationship between brightness and the actual conditions of the Earth's surface can identify the colors of objects independently of time and altitude. The actual surface conditions depend on the solar incidence angle, atmospheric conditions, the sun-Earth-detector geometry, and the topological difference between the Earth and the detector lens.

    Of these, the feasible approaches for us are the second and the third. However, due to some technical difficulties affecting the HABET team as a whole, we have had limited launches during the semesters. Although we are technically able to install a camera to take infrared images, we will not have enough time and experience to process both the visual images and the visible and near-infrared images. We therefore decided to focus on the second approach; from here onward, color calibration refers to this second approach.

     

    3) Color Reference

    For our color calibration we have two flavors: 1) the input image is compared with remote-sensing reference data collected for a particular place, date, and time to remove the attenuation problem; 2) a good color plate is used as the reference with which all input images are compared.

    The idea behind the latter is similar to the former. Even though a color reference plate has limited color components, it is a reliable reference. Therefore a good color reference is needed, and it will be used in all HABET missions. A color plate will be created and placed some distance from the camera. However, we have to make sure that the color plate is always included in the images taken, and that it occupies only one corner of the images so that it does not block the aerial view of the Earth.

    Since most of our project involves aerial images consisting of natural colors, we decided to use red, green, and blue components as our color reference. To minimize the color attenuation and reflection problems, the material of the color reference must be chosen wisely. Initially, we chose three sets of RGB references of different materials (rough-surface cardboard, smooth high-reflection paper, and rough-surface paper) and put them on a square board. We chose a square board because we plan to attach it on top of the main payload, which will be right below our visual payload.

    Unfortunately, we never had the chance to fly our payload and try the color reference plate. However, we did do some field tests and included the color plate in all the images taken. From the field tests we realized that smooth, highly reflective material does not work well, so we decided to try two other materials, cloth and felt. The rationale is that both have rough surfaces and do not reflect much sunlight, which makes them better references, since their color remains almost the same under different intensities and angles of incident solar radiation.

     

     

    4) Preliminary Analysis

    We have done some preliminary work on the color calibration. We tried to find the same portion of area in the reference image and the chosen image and then compare the two pixel by pixel over this common area. This turned out to be nearly impossible: it is just too difficult to process the images pixel by pixel, and the results were not satisfying at all. We therefore came up with four new approaches to color calibration, which process the whole image at a time instead of pixel by pixel. The four approaches are as follows.

     

    The first approach is actually a basic step in which we decompose the color image into R, G, B (red, green, blue) components, since the three components have different color intensities and need to be processed separately. Next, we apply histogram equalization to the three components. This turns out not to be a good approach, because it only enhances the contrast and does not calibrate the color; the result of using histogram equalization is provided in Appendix D. Another approach is to pass the three components through various kinds of filters, such as median, windowing, and Wiener filters; the results are also given in Appendix D. We find that this method is not a good approach either, because the filters only smooth the images and still give no good solution to the color calibration. The next approach is to use statistical information to process the images. This part is subdivided into several methods: mean, standard deviation, and auto-/cross-correlation (Wiener optimization).

     

    Mean

    Here we use two images: a reference image with good color appearance and the input image that we are going to color-calibrate. First, we find the mean values of the reference image and the input image. Then we take the ratio of the two mean values and apply it to the input image to get the new (output) image. The following mathematical expression gives the output:

     

    Output image = (mean of reference / mean of input) * input image

     

    We wrote the program in Matlab and ran it. The result is quite good and satisfying: the unwanted color (clouds) is removed by this approach, but the area previously covered by the clouds becomes smooth in the result; in other words, the resolution of the image decreases. The result of this approach is attached in Appendix D. We also tried various kinds of filters, but the results are still not satisfying, since the filters only remove noise and add a smoothing effect that we do not want.

    Next, we take the difference between the two mean values instead of the ratio. As a result, the output image has better resolution than with the ratio approach, because the areas that were smoothed by the ratio process become clearer. The following mathematical expression gives the output:

     

    Output image = (mean of reference - mean of input) * input image

     

    For both methods the output values cannot exceed 1 or go below 0: values below 0 are set to 0, and values above 1 are set to 1. The difference approach is better than the ratio approach. The only problem with the mean approach is that the output image has darker contrast than the original image.
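
    A minimal MATLAB sketch of the ratio version of the mean method (file names are hypothetical; the difference version substitutes the difference of the means for the ratio term, as written above):

        % Mean-based color calibration, ratio version, applied plane by plane.
        ref = im2double(imread('reference.jpg'));   % hypothetical file names
        in  = im2double(imread('input.jpg'));
        out = zeros(size(in));
        for c = 1:3                                 % red, green, blue planes
            r = ref(:,:,c);  p = in(:,:,c);
            out(:,:,c) = (mean(r(:)) / mean(p(:))) .* p;
        end
        out = min(max(out, 0), 1);                  % clip to the valid range 0..1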

     

    Standard Deviation

    The main purpose of this method is to make the standard deviation of the desired image as close as possible to that of the reference image. To do this, we apply the ratio between the reference and the input image pixel by pixel. The equation we used is shown below:

    Output(x,y) = m2 + a * ( P(x,y) - m2 )

    where P(x,y) = input image
    m2 = the mean of the input image
    a = the ratio between the standard deviations of the reference and the input image

    As usual, we apply this equation to the R, G, and B components of the input image. After running this equation in our program, the result we get is mostly a contrast enhancement with lots of white spots in the output image. This is because many values in the output exceed the maximum intensity level (1 in this case), so all such values are rounded to 1 (white). We conclude that this method alone is not suitable for color calibration.
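
    A sketch of this calculation for one color plane, using the equation above (file names are hypothetical):

        % Standard-deviation matching for one color plane.
        ref = im2double(imread('reference.jpg'));  ref = ref(:,:,1);   % hypothetical files
        in  = im2double(imread('input.jpg'));      in  = in(:,:,1);    % red plane only
        m2  = mean(in(:));                 % mean of the input plane
        a   = std(ref(:)) / std(in(:));    % ratio of reference to input std. deviation
        out = m2 + a .* (in - m2);         % spread the input values about their mean
        out = min(max(out, 0), 1);         % out-of-range values give the white spots noted above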

     

    Weiner Optimization

    As usual, two images are used for the processing: one as the reference image and one as the input image. The reference image is the quality image desired from every HABET mission. However, as mentioned earlier, the images taken at high altitude are not always sharp and clear, and useful information cannot be retrieved unless the image is qualitatively analyzable. This is where color calibration comes into play.

    In Wiener optimization, we try to match the color of the input image to that of the reference. In other words, we try to minimize the difference between the two images. For e = d - w*x, where

    e = error
    d = desired (reference) value
    w = coefficient
    x = input (current) value

    the mean-squared error is

    E[e^2] = E[(d - w*x)^2] = E[d^2] - 2*w*E[d*x] + w^2*E[x^2]

    We want E[e^2] to be as small as possible; setting its derivative with respect to w to zero gives

    w = E[d*x] / E[x^2]

    Therefore, E[d*x] and E[x^2] are needed for the calculation; they are the cross-correlation and auto-correlation respectively, and MATLAB has a built-in function, XCORR(A,B), to calculate them.
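
    A sketch of the resulting coefficient for one color plane (file names are hypothetical; the dot products below are the zero-lag cross- and auto-correlation values that XCORR computes):

        % Wiener coefficient w = E[d*x]/E[x^2] for one color plane.
        ref = im2double(imread('reference.jpg'));   % hypothetical file names
        in  = im2double(imread('input.jpg'));
        d = ref(:,:,1);  x = in(:,:,1);             % red planes
        w = (x(:)' * d(:)) / (x(:)' * x(:));        % equals mean(d(:).*x(:)) / mean(x(:).^2)
        out = min(max(w * x, 0), 1);                % calibrated red plane, clipped to 0..1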

     

     

    5) Advance Analysis

     

    Combination of Mean and Standard Deviation

    This method combines the mean and standard-deviation methods described above. We still need a reference image with good color and the input image to be processed. By calculating the mean and standard deviation of both the reference and input images, we can shift and reshape the Gaussian distribution of the input image toward that of the reference image. The process is carried out separately for the red, green, and blue components of the reference and input images. Figure 8 shows a sample Gaussian distribution for the reference and input images.

     


    Figure 8 Sample Gaussian Distribution of Reference and Input Images

     

    Starting from the Gaussian distributions, we shift the mean value of the input image to the mean value of the reference image. After shifting the mean, we adjust the shape by changing the standard deviation of the input image to that of the reference image. Figure 9 shows how the process works.


    Figure 9 Shifting and Reshaping the Gaussian Distribution of Input to Reference Image

     

    The procedure for shifting and reshaping the image values is as follows. First, we decompose the reference and input images into red, green, and blue components and calculate the mean and standard deviation of each. We take the difference between the means of the reference and input images, called delta, and add it to the input values to obtain the shifted result. To adjust the shape of the Gaussian distribution, we take the ratio of the standard deviations of the reference and input images and apply it to obtain the reshaped result. The algorithm is:

    Delta = m1 - m2, where m1 and m2 are the mean values of the reference and input images respectively.

    Output = Input + Delta, i.e. the input value plus the delta value.

    Ratio = STD1 / STD2, where STD1 and STD2 are the standard deviations of the reference and input images.

    If Output > m2:   Result = m2 + | Output - m2 | * Ratio
    If Output < m2:   Result = m2 - | Output - m2 | * Ratio

    We wrote the program in Matlab and ran it. The results, table, and Matlab program for this method are attached in Appendices J.1, J.2, and J.3 respectively. We can hardly tell the difference between the result and the input image by eye; however, the table in Appendix J.2 lists the mean and standard deviation values of these pictures, and it shows that the mean and standard deviation of the output image are quite close to those of the reference image. This implies that we have shifted the mean and reshaped the Gaussian distribution. The disadvantage of this combination is that all negative results are set to zero and all values exceeding one are set to one; this truncation gives a few black and white spots in the results. The following page shows a result of this color calibration. Again, this method is only good if the reference image and the input image consist of almost the same types of colors throughout.
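
    A sketch of this procedure for one color plane (file names are hypothetical; the two branches of the rule above collapse into a single expression):

        % Shift the mean, then reshape the spread, for one color plane.
        ref = im2double(imread('reference.jpg'));  ref = ref(:,:,1);   % hypothetical files
        in  = im2double(imread('input.jpg'));      in  = in(:,:,1);
        m1 = mean(ref(:));   s1 = std(ref(:));     % reference statistics
        m2 = mean(in(:));    s2 = std(in(:));      % input statistics
        shifted = in + (m1 - m2);                  % shift the mean (Delta)
        result  = m2 + (shifted - m2) .* (s1/s2);  % both branches of the rule above
        result  = min(max(result, 0), 1);          % the truncation that causes the spots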

     

     

    Matching Image Patterns

    What we mean by an image pattern is the plot of the red, green, and blue values for a pixel in the image. Again, we need a reference image and an input image. The purpose of this method is to figure out the patterns in the images and match each pattern of the input image to a pattern of the reference image. First, we decompose the images into red, green, and blue components. Then we find the pattern of the reference image and of the input image for every pixel. We look for similar patterns between the reference and input images, meaning that the slopes of the lines are almost the same; pictures of the same area usually give similar patterns. After figuring out the similar patterns, we assign the matched pattern directly to the output to get the result. The following graph shows an example of one similar pattern.


    Figure 10 Example of the Similar Input and Reference Patterns

     

    In the graph above, the pattern of the input image is similar to the pattern of the reference image for one pixel. We can check the similarity by calculating the slope of the pattern. Since there are too many pixels in one image, we initially process only 16x16 pixels (one block). We take a block with a good color reference (image 10 of the HABET 12 mission) and search for its patterns, and we do the same for the input image (image 11 of the HABET 12 mission). To get pattern similarity between reference and input, we take a block from the area common to the reference and input images; the reference image is taken at a lower altitude than the input image. The pictures and the result are shown in Appendix K.1. The result looks greener over the particular area we assigned, which is good. We also show the matching patterns of reference and input for one pixel in a graph, attached in Appendix K.2; the output and reference patterns are almost identical, which is good.
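
    A sketch of the block comparison under our reading of the method (refBlk and inBlk are hypothetical 16x16x3 blocks, scaled to 0..1, cut from the common area; tol is a hypothetical threshold; the report leaves open whether the reference or the input pattern is written to the output, and this sketch writes the reference pattern where the slopes match):

        % Slope-based pattern matching over a 16x16 block.
        tol    = 0.05;                               % hypothetical similarity threshold
        outBlk = inBlk;                              % start from the input block
        for i = 1:16
            for j = 1:16
                pr = squeeze(refBlk(i,j,:));         % reference R,G,B pattern at this pixel
                pn = squeeze(inBlk(i,j,:));          % input R,G,B pattern at this pixel
                if norm(diff(pr) - diff(pn)) < tol   % compare the R->G and G->B slopes
                    outBlk(i,j,:) = refBlk(i,j,:);   % slopes match: take the reference pattern
                end
            end
        end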

    This process gives better results only if the two blocks we select have almost the same pattern; if the two blocks being compared differ in pattern, the result will be bad. The Matlab program and the table of the 16x16-pixel patterns (red, green, and blue values of the reference, input, and output) are provided in Appendices K.3 and K.4 respectively. The differences between the red, green, and blue values of the reference and the output are small, which is good, because the patterns are almost the same over this region. The blue component, however, always shows a larger difference than we expect, because the blue channel is the least reliable of the three colors. In addition, to have a good color reference, it helps if the intensity of the reference is near the middle of the intensity range. Furthermore, by analyzing the patterns and intensities, we conclude that for the lower-altitude images (usually used as references) the intensity is lower, or the light absorption is less, than for pictures taken at higher altitude.

    We also tried the method on whole pictures, but the results came out badly. The patterns of a whole picture are sometimes too difficult to match unless the two photos are exactly the same. We also had difficulty running the program, because too many pixels have to be processed for a whole picture: the loop that processes every pixel (222x360 for a whole picture) is very large, and the program kept running for a few hours, with the result coming out very bright. It is simply too hard to match the patterns of the whole reference and input images, and the overall intensity of one picture differs greatly from that of another.

     

    6) Contrast Analysis

    Images taken by the visual camera have a contrast problem: the center of the image is the brightest and the four corners are the darkest. At this point we want to analyze how the levels of brightness are distributed across the image. The best way to show the brightness levels is to make brightness contours from the images.

    When making the contours, the information inside the images is not important, so we can use a lowpass filter to smooth (in other words, blur) the images. The filter we used to make the contours is the Wiener filter, which takes the average value over a window of assigned size; in this case we assigned a window the size of the entire image (360x222). After filtering, the output images show the contours of the image brightness. The program used for the contrast analysis is attached in Appendix L, and a few examples of the original images and their brightness levels are attached in Appendix M. From the output images we can see that they are brightest at the center and darken gradually away from the center, with the corners being the darkest parts of the images. The images below show the contrast level of each red, green, and blue component together with the original image.
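
    A rough sketch of the contour step in MATLAB (the file name is hypothetical, and a 31x31 smoothing window stands in for the full-image window used in the report, just to keep the sketch fast):

        % Brightness contours: heavy smoothing of one color plane, then a contour plot.
        img = im2double(imread('mission_photo.jpg'));   % hypothetical file name
        R   = img(:,:,1);                               % red plane
        Rs  = wiener2(R, [31 31]);                      % Wiener smoothing over a large window
        contour(Rs, 10)                                 % ten brightness contour levels
        title('Brightness contours, red plane')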

    There are a few exceptions where we find bright areas at the side of an image. This is because those parts contain very bright objects such as white buildings, roads, etc. Some images showing the effect of bright objects (with the original image) are given in Appendix M.

    Contrast analysis is very important for remote sensing and also for color calibration: if we are color-calibrating a region near the side of an image (dark), we have to use a brighter reference pattern than for the center of the image. As mentioned in the pattern-matching part, we try to shift the input image's pattern (pixel by pixel) to the reference image's pattern. Since we have multiple reference patterns that differ only in contrast, we can use the result of the contrast analysis to determine which pattern to use for the color calibration. At this point, we always use the center of an input image as the reference point for the contrast level, and we try to find a reference pattern with the same contrast and shift the input pixel to that reference pixel.

     

    Budget

    The budget given in Table 2 is for the whole year within 20% accuracy.

     

    Item                    Unit Price    Quantity    Total
    Visual Camera           $200.00          2        $400.00
    Digital Camera          $1000.00         1        $1000.00
    Visual color film       $4.80           10        $48.00
    Infrared film           $20.00           5        $100.00
    * Film Development      ** $4.00         5        $20.00
    Print Magnification     N/A            N/A        $200.00
    Trigger Controller      $50.00           1        $50.00
    Others                                            $100.00
    Total                                             $1918.00

    Table 2 Two-Semester Budget

     

    * This does not include the normal film-development cost, which is done on campus free of charge; the cost shown is only for infrared film development.

    ** The $4.00 charge is for a 36-exposure roll printed on a single sheet of paper. We then choose the frames we are interested in and send them to be magnified to normal-size prints. The real cost therefore depends on the number of magnified prints, which is unknown; we estimate $200 for print magnification.

    On the other hand, each member is expected to spend 6-7 hours per week on this project this semester. The total available human resource is more than 270 hours (assuming 6 hours/person-week). The estimated hours spent on each task are summarized as follows:

     

    Tasks                           Est. Total Hours
    Learning Matlab Toolbox              15 hours
    Basic Camera Work                    30 hours
    Camera Alignment & Timing            60 hours
    Color Calibration                    60 hours
    Project Poster                       24 hours
    Research                             45 hours
    Documentation & Presentation         40 hours
    Total                               274 hours

    Table 3 Human Effort Budget for Spring 97

     

    For the Fall semester, we estimate spending 5-6 hours/person-week on the project. The breakdown by task is as follows:

     

    Tasks                           Est. Total Hours
    Spectral Plotting                    45 hours
    Image Enhancement Process            60 hours
    Algorithms Refinement                45 hours
    Research                             45 hours
    Documentation & Presentation         40 hours
    Total                               235 hours

    Table 4 Human Effort Budget for Fall 97

     

    Risks

    Some risks to consider in our project are as follows:

    1. Rain, fog, wind, snow, and other bad weather conditions might have a negative effect on our pictures.

    2. The thread connecting the payload and the parachute might break under unforeseen circumstances.

    3. If the payload hits the ground too hard on landing, the camera lenses might break or be scratched.

    4. Strong wind might cause rotation and shaking problems for the payload.

    5. A flaw in the microcontroller could disable the triggering of the cameras.

    Project Information


    Client Information

    Iowa Space Grant Consortium at Iowa State University
    136 Town Engineering
    Ames, IA 50010
    Tel: (515) 294-2672
    URL: http://www.public.iastate.edu/~isgc



    Jooho Lee
    136 Town Engineering
    Iowa State University
    Ames, IA 50014
    Tel: (515)294-2672 (O)
    (515)292-8420 (H)
    Email: jlee@iastate.edu
    URL : http://www.public.iastate.edu/~jlee

    Team Members

    Chia-Jit Lim
    301 South 4th St. #12
    Ames, IA 50010
    Tel : (515)233-3851
    Email: jitleia@iastate.edu
    URL : http://www.public.iastate.edu/~jitleia

    Umar Affan
    1225 Delaware Avenue #9
    Ames, IA 50014
    Tel : (515)268-9515
    Email: guess119@iastate.edu
    URL : http://www.public.iastate.edu/~guess119


    Copyright Iowa State University 1997
