Need for RGBs
The amount of imager data from the world's weather satellites is impressive and will increase dramatically
when new geostationary and polar-orbiting satellites come online. But it poses a challenge: figuring
out how to extract, distill, and package the data into products that are easy for forecasters to interpret
and use.
"Red, Green, Blue" or RGB processing offers a simple yet powerful solution. It consolidates
the information from different spectral channels into single products that provide more information than
any one image can provide.
RGB products have long been used in research, education, and applied fields such as land management.
For example, Landsat, an Earth resource satellite, has been observing land cover, vegetation, and water
resources to help municipal planners and developers since the early 1970s. As the availability of RGB
products continues to increase for a variety of environmental applications, including meteorological
analysis, forecasters need information on what these products provide and how to integrate them into
their operations.
Sample RGB
While grayscale images are still useful, they often cannot match the effectiveness of RGB products.
In fact, RGB products are often more useful than traditional single-channel color enhancement techniques.
Take this example: an EOS MODIS fire RGB product over the U.S. state of Georgia. (EOS stands for Earth
Observing System, MODIS for Moderate Resolution Imaging Spectroradiometer.) It's easy to distinguish
active fires in pink from smoke in blue. Recently burned areas appear dark magenta, and vegetated areas
are green. The product was constructed by combining three channels at different wavelengths. Each channel
contributes key pieces of information.
The vivid depiction of smoke depends strongly on the Channel 1, 0.63-micrometer (µm) visible image.
The black burn scars from recently burned areas come from the longer wavelength Channel 2, 0.86 µm
visible image.
Information about hotspots from intense fires comes from the Channel 7, 2.1 µm near-infrared image.
By assigning each of the three spectral channels to a different primary color and combining them into
one product, we get far more information than any single channel could provide. Note that this MODIS product is referred to as a false color RGB.
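For readers interested in the mechanics, here is a minimal Python sketch of this kind of channel-to-color compositing. The channel-to-color mapping shown is an assumption inferred from the colors described above (fires pink, vegetation green, smoke blue), not the documented MODIS fire RGB recipe, and the inputs are assumed to be reflectances already scaled from 0 to 1.

```python
import numpy as np

# Hypothetical sketch of a three-channel false color composite.
# The channel-to-color mapping is assumed, inferred from the colors described above.
def fire_rgb(ch7_2p1um, ch2_0p86um, ch1_0p63um):
    """Stack three reflectance arrays (0-1) into an 8-bit RGB image."""
    rgb = np.dstack([ch7_2p1um,    # red:   2.1 um near-infrared (hotspots)
                     ch2_0p86um,   # green: 0.86 um (vegetation)
                     ch1_0p63um])  # blue:  0.63 um visible (smoke)
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```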
RGB products like this are routinely used in fire monitoring even though the MODIS imager was originally
intended as a non-operational research instrument.
As more next generation polar-orbiting and geostationary weather satellites are launched, RGB products will be used routinely for a large number of applications, including fire monitoring. The Suomi-NPP satellite, launched in October 2011, marks the beginning of this new era from low earth orbit.
From a geostationary perspective, the U.S.’s GOES-R satellites will complement the existing constellation of EUMETSAT’s Meteosat satellites. This will expand the coverage of RGB capabilities from Europe and Africa to the Americas and eastern Pacific. RGB products will also become more common across Asia and the Western Pacific, as countries across that region launch geostationary weather satellites with similar spectrally enhanced imagers over the next decade.
RGB Animations
This simple but useful GOES visible and infrared RGB product shows Hurricane Katrina making landfall over Mississippi.
The visible channel depicts cloud cover while the infrared channel is used to indicate cloud height.
The yellow from the visible channel shows features where the overlying cirrus is absent or thin. For
example, we can see lower-level features, such as clouds within the storm's eyewall region. The blue
shows cirrus clouds at the periphery.
Now look at this two-day loop of the hurricane. It uses the same RGB formula that we demonstrated earlier
but substitutes the shortwave infrared channel for the visible channel during nighttime.
Notice how much information is available about clouds in the low-level environment during the daytime
when the visible channel is available compared to nighttime when the infrared channels are used.
Since the infrared channel is present both day and night, we never lose sight of the storm's fundamental
development.
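As a rough illustration, here is one plausible way such a visible/infrared RGB could be assembled, assuming the visible channel drives both red and green (so bright low clouds appear yellow) and an inverted infrared channel (cold cloud tops bright) drives blue. The temperature limits below are placeholders; the actual recipe for this GOES product is not specified here.

```python
import numpy as np

def vis_ir_rgb(vis_reflectance, ir_bt_kelvin, bt_warm=300.0, bt_cold=200.0):
    """Hedged sketch: visible -> red and green, inverted infrared -> blue.
    bt_warm and bt_cold are placeholder scaling limits, not documented values."""
    vis = np.clip(vis_reflectance, 0.0, 1.0)
    # Invert the infrared brightness temperatures so cold cloud tops are bright
    ir = np.clip((bt_warm - ir_bt_kelvin) / (bt_warm - bt_cold), 0.0, 1.0)
    return (np.dstack([vis, vis, ir]) * 255).astype(np.uint8)
```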
This satellite product, which covers the tropics, is distributed by NESDIS in near real-time. The next generation GOES-R ABI
(Advanced Baseline Imager) will have an expanded suite of 16 spectral channels. This will enable much improved
RGB products for depictions of cloud cover that can include additional quantitative information about composition
and evolution at higher spatial and temporal resolution.
RGB Products
This table shows some of the RGB products that are routinely produced from EUMETSAT's MSG (Meteosat Second Generation) SEVIRI imager, and Terra
and Aqua MODIS imagers. Similar products are coming online with the Suomi NPP polar-orbiting satellite, and still more will be possible with future satellites including GOES-R and the Suomi NPP follow-on polar orbiters known as JPSS (or Joint Polar Satellite System).
Click each product to see a sample image.
Note that from this point on, we will refer to RGB products simply as RGBs.
RGB Applications
This table presents the same information but from a different perspective, applications rather than
products. You can see the purposes for which RGBs are used.
Using RGBs Operationally
These are some of the questions that forecasters often ask about using RGBs. Many of the answers will
be elaborated on in other parts of the module.
Click each question for a brief response.
How easy is it to interpret RGBs?
RGBs are generally easier to use than single channel images and are often more effective at depicting
important meteorological phenomena. Although the color schemes for RGBs are generally straightforward,
it typically takes some training and experience to use them correctly. Some products are "intuitive," while
others are not and can easily be misinterpreted.
Are RGBs available in real-time? How does one access them?
RGBs are increasingly available in real-time and near-real-time via the Internet. Efforts are underway
to get RGBs into forecast offices.
Here are some commonly used RGB sites:
Who produces operational RGBs?
A number of meteorological services produce their own RGBs, following a set of best practice guidelines
developed by the World Meteorological Organization. The guidelines are intended to standardize channel
selection and color assignment for a common suite of products across international organizations. http://www.wmo.int/pages/prog/sat/documents/RGB-1_Final-Report.pdf
What are the benefits of creating your own RGBs?
It can be useful to make an RGB when you're in a unique forecasting situation for which no other
products are available. However, you need to be aware of the challenges and pitfalls of making
RGBs. In general, it is better to use standardized RGBs.
How do RGBs differ from single channel color enhancements and quantitative products?
RGB processing is one of a range of techniques developed to extract, emphasize, and optimize information
from satellite imagery.
- Grayscale images display imager information from single channels over a range of gray shades; products are made from up to 256 shades of gray
- Color displays of single channels are similar to grayscale images, but the information is displayed using a set of assigned colors, rather than gray shades, to highlight specific features of interest, such as the colder cloud-top temperatures associated with deep convection; products are made from 256 colors
- RGBs are generally made from three or more spectral channels or channel differences;
each is assigned to one of the three primary colors, and the final product highlights specific
feature(s); products are made from millions of colors
- Classification products depict various non-quantitative classes of phenomena, such as cloud types (stratus, cirrus, cumulus, etc.), using a color bar key
- Quantitative products depict physical quantities, such as sea surface temperature
and total precipitable water content, in various colors using a graded color bar; for more information,
see the COMET module "Creating Meteorological Products from Satellite
Data" at https://www.meted.ucar.edu/training_module.php?id=485
About the Module
This module provides an overview of meteorological and environmental RGBs: how they are constructed
and how to use them.
The RGB development process is described in the context of two RGBs: natural color and dust. This is
followed by a discussion of the future of RGBs when most geostationary and polar-orbiting satellites
will have far more channels than they do today.
The second half of the module, the "Applications" section, focuses on the use of RGBs and
provides examples, interpretation exercises, and background information for many of the commonly used
products.
The module is written for operational forecasters, meteorology and remote sensing students from the undergraduate
level on up, scientists, and all others who rely on satellite products for environmental information.
RGB Color Model, 1
- The RGB color model is used to produce the colors in electronic devices
- Has three primary colors: red, green, and blue
- Combining them produces the secondary colors (yellow, magenta, cyan), grays, black, white
Before exploring the RGB construction process, it's helpful to know a little about the RGB color model.
Various models are used to describe colors, with the RGB color model being the one used to produce the
colors that we see in electronic devices such as televisions and computer monitors.
The RGB model has three primary colors: red, green, and blue. By combining them in various ways, we
get a broad array of colors, from the secondary colors of yellow, magenta, and cyan, to the grays, black,
and white. Understanding how these colors are made is important for producing and interpreting RGB products,
so we'll take a few minutes to explore the process.
Click each colored box in the graphic. A graphic will display, with numbers indicating
the contribution of red, green, and blue to the color. The numbers range from 0 to 255, representing the
intensity of the color.
RGB Color Model, 2
Complete the following statements about how the colors are produced. (Use the selection
box to choose the answer that best completes the statement.)
This diagram shows how the primary colors are combined into the secondary colors,
with white at the very center. In summary:
- The primary colors are red, green, and blue
- The secondary colors are:
- Yellow (including shades of yellow, orange, brown), made by mixing red and green
- Cyan, made by mixing green and blue
- Magenta, made by mixing red and blue
- Gray is made by mixing the three primary colors in equal, intermediate amounts (less than maximum intensity)
- White is made by mixing the three primary colors in maximum amounts
- Black is the absence of the primary colors
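As a quick illustration of these rules, here are the 0-255 red, green, and blue intensities for the colors just listed (a simple Python reference table; the gray value of 128 is just one of many possible intermediate intensities).

```python
# (red, green, blue) intensities, each ranging from 0 to 255
colors = {
    "red":     (255,   0,   0),
    "green":   (  0, 255,   0),
    "blue":    (  0,   0, 255),
    "yellow":  (255, 255,   0),  # red + green
    "cyan":    (  0, 255, 255),  # green + blue
    "magenta": (255,   0, 255),  # red + blue
    "white":   (255, 255, 255),  # all three at maximum
    "gray":    (128, 128, 128),  # all three at an equal, intermediate level
    "black":   (  0,   0,   0),  # absence of all three
}
```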
A Simple Example
Perhaps the best known RGB combination is the true color product. It highlights atmospheric and surface
features that are hard to distinguish with single channel images alone and imitates how the human eye
might see the scene. Among current weather satellites, true color products are available from the
Terra and Aqua MODIS imagers and the Suomi NPP VIIRS imager, since both have the requisite visible channels. Additional RGB capabilities will come online as new imagers are launched. Some of these include:
- The Visible Infrared Imaging Radiometer Suite (VIIRS) on U.S. JPSS (Joint Polar Satellite System) polar-orbiting satellites
- The Flexible Combined Imager (FCI) on the Meteosat Third Generation (MTG) geostationary weather satellites
- The Visible and Infra-Red Radiometer on China's FY-3 polar-orbiting satellites
- The Advanced Himawari Imager on Japan's third generation geostationary weather satellites
The true color RGB is constructed from the three visible wavelengths that correspond to the red, green
and blue components of visible light. The first spectral channel is assigned to be red, the second channel
green, and the third channel blue.
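In code, that assignment is simply a stacking of the three channels in red-green-blue order. The following minimal Python sketch assumes three co-registered reflectance arrays already scaled from 0 to 1; operational true color products typically involve additional calibration and brightness adjustments not shown here.

```python
import numpy as np

def true_color_rgb(red_vis, green_vis, blue_vis):
    """Minimal true color composite: stack the red, green, and blue
    visible channels (reflectances 0-1) in that order."""
    rgb = np.dstack([red_vis, green_vis, blue_vis])
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```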
In the resulting RGB, it’s easy to distinguish the small smoke plume in Australia from the large
area of blowing dust. The suspended dust particles take on a light brownish appearance because they reflect
more light at the longer visible wavelengths, the red and green regions of the spectrum. Smoke from burning
vegetation appears gray, reflecting the red, green, and blue components of visible light in relatively
equal amounts. In general, clouds are easily separated from suspended dust in true-color images.
The small group of red pixels at the western end of the smoke plume indicates hot spots or fires. These
were inserted after the channel compositing took place, using information from the thermally sensitive
shortwave infrared channels on MODIS. Later we will discuss the 'natural color' product, which has some
similarities to true color as well as some important differences.
The Process of Building RGBs
The process of building RGBs is part science and part art, part random exploration and part methodical
experimentation. Over time, a set of best practices has been established to guide the development process.
While some good RGBs arise from random experimentation, your best chance of making a useful product
will result from following an established procedure. Even if you never experiment with RGB compositing,
being aware of the process will help you better understand and interpret the products.
The process of building an RGB has five steps.
- Step 1: Determine the purpose of the product
- Step 2: Based on experience or scientific information, select three appropriate channels or channel
derivatives (such as an inter-channel difference) that provide useful information for the product
- Step 3: Pre-process the images as needed to ensure that they provide or emphasize the most useful
information
- Step 4: Assign the three spectral channels or channel derivatives to the three RGB color components
- Step 5: Review the resulting product for appearance and effectiveness
We will examine the steps on the following pages, applying them to build two products: the natural
color RGB and the dust RGB.
Step 1: Determine the Purpose of the Product
A good RGB should convey information that would be difficult or time consuming to assess from one or
more individual satellite images. To the extent possible, the product should be unambiguous and use
intuitive colors to help highlight important meteorological and surface features.
Let's say, for example, that we want to develop a product that emphasizes features such as topography,
vegetation, low clouds, and snow cover. We want it to cover Europe using a geostationary satellite with
looping capability. This means that we will use EUMETSAT's Meteosat Second Generation (MSG) satellite
imager called SEVIRI (Spinning Enhanced Visible and InfraRed Imager). True color images are not possible since the instrument does not carry blue and green visible channels. Therefore
we will build a similar product that EUMETSAT scientists call the 'natural color' RGB.
Step 2: Select Appropriate Channels or Channel Derivatives
Move your mouse over each channel for a description.
Now that we've identified the purpose of the product, we can select the spectral channels that are sensitive
to the features that we want to highlight.
The MSG SEVIRI imager has twelve channels, more than are currently on other operational geostationary
satellites. MSG also offers the best preview of the upcoming GOES-R satellite, which will carry the 16-channel ABI imager.
To begin, we'll take a quick tour through the MSG toolbox so you know the channels that can be combined
into an RGB. Move your mouse over each channel for a description.
Selecting the Actual Channels
Now we're ready to select the channels for our product. Of the combinations below,
which would produce the best 'natural color' RGB product, one that emphasizes features such as topography,
vegetation, and snow cover? (Choose the best answer.)
Note that for readability, we've removed
the micrometer symbols following each channel's wavelength.
The correct answer is D.
The 0.6 µm Vis, 0.8 µm Vis, and 1.6 µm NIR (Near InfraRed) channels represent solar wavelengths
that are effective for characterizing terrain and land cover features. Most of the other channels sense at shortwave, midwave, and longwave
infrared wavelengths, only some of which can be used to detect surface features.
Step 3: Pre-Process the Images as Needed
Before combining the images, we may need to pre-process them for visual sharpness or to bring out vital
information in a more prominent way. Pre-processing can, for example, transform an input image with a "washed
out" appearance to one with high contrast.
In our case, no pre-processing is needed since the images have high enough brightness contrast. But when we build the dust RGB later, you'll see how this step can be more complex.
Step 4: Assign Colors
To assign the spectral channels to the right primary colors, we need to know how each channel responds
to key atmospheric and surface features. From this generalized schematic, we can see that the relative
amount of reflected radiation in the three solar channels varies depending on the features observed.
The relative degree of reflectivity is about equal over some features, such as the ocean. But there are
sharp differences over others, such as ice clouds. We exploit these differences when we match a channel
with one of the three RGB colors.
Here is a summary of the relative reflectivities in our hypothetical landscape.
- Bare land, especially when dry, is strongly reflective in the 1.6 µm near-infrared channel
- Vegetated surfaces are strongly reflective in the 0.8 µm visible channel
- Water phase clouds have about the same reflectivity in all three channels
- For ice phase clouds and snow cover, reflectivity is strong
for 0.6 µm Vis and 0.8 µm Vis channels, but weak for 1.6 µm NIR channel since ice crystals reflect
poorly at that wavelength
- Ocean water is poorly reflective in all three channels
Selecting the Best RGB Combination
Let's see how we use this information to build the natural color RGB. Each of the three input channels
could be assigned to any of the three colors, which means there are six possible combinations for building the natural
color RGB. All of the combinations contain the same information since they are based on the same three
input channels. But they have radically different color schemes.
Which combination produces the most natural looking product, one in
which vegetation is green, deserts are brownish, and low clouds are white? View the products by clicking
the six View RGB product links. Then select the most natural looking product by clicking the
radio button beside it and clicking Done. Click here to review RGB
color theory.
The correct answer is F.
With its white low clouds, brownish desert, and green vegetation, combination
#6 is the most natural-looking option. We'll discuss this RGB more on the next page.
Additional Information
Here is some additional information about the natural color RGB.
Bare land (including desert) is brownish red, representing the very strong contribution
from the 1.6 µm near-infrared channel in red and the weaker contribution from the 0.8 µm
visible channel in green. There is little contribution from the 0.6 µm visible channel in blue.
Vegetation, including much of the land over Europe, is highly reflective in the 0.8 µm
visible channel, which produces the green vegetative shading in the product.
Water phase clouds are very reflective in all three channels and combine to produce
white water phase clouds.
You probably noticed that snow cover is cyan. That’s because
snow is highly reflective in the 0.8 µm and 0.6 µm visible channels. When assigned to be
green and blue, the colors combine to produce cyan. Although this is unnatural in appearance, non-intuitive
colors are common in RGBs and easy to interpret if you know the color scheme.
Ice clouds are also cyan.
Finally, water is dark because of the minimal reflection and hence contribution from all three channels.
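Here is a minimal sketch of the natural color assignment just described, assuming three co-registered SEVIRI reflectance arrays scaled from 0 to 1; operational versions may apply additional scaling or gamma adjustments not shown here.

```python
import numpy as np

def natural_color_rgb(nir_1p6, vis_0p8, vis_0p6):
    """Natural color RGB sketch: 1.6 um NIR -> red, 0.8 um Vis -> green, 0.6 um Vis -> blue."""
    rgb = np.dstack([nir_1p6,   # red:   bare/dry land appears brownish red
                     vis_0p8,   # green: vegetation appears green
                     vis_0p6])  # blue:  snow and ice clouds appear cyan (strong green + blue, weak red)
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```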
What would happen if one of the less optimal combinations were used for the natural color RGB? You would eventually get used to it and learn to interpret it correctly. But the goal is to create products that provide useful information and communicate it quickly to forecasters.
Dealing With Ambiguities
A nearly identical RGB is available from Terra and Aqua MODIS data. It looks similar to EUMETSAT’s natural color RGB
but uses the 2.2 µm NIR channel in place of the 1.6 µm NIR channel. The MODIS false color product has
the same color interpretation scheme and is used to identify the same features. However, it does a better
job of detecting fires. MODIS false color products are available in near real-time.
In this winter example centered on Northern California, we can differentiate the cyan cirrus cloud near
the coast from the whitish water cloud trapped in valleys over Oregon. But it’s hard to distinguish
cirrus cloud from snow cover over the mountains based on color alone. Both features are cyan because
ice crystals reflect strongly in the visible channels (which have been assigned to be green and blue)
and poorly in the near-infrared channel (which has been assigned to be red).
Ambiguous situations like this can often be resolved in various ways, for example by:
- Seeing if there is another RGB developed for the situation; in this case, it would be useful
to check the Cloud Over Snow RGB, described in the Applications section.
- Looping images to differentiate surface from atmospheric features.
- Noticing that surface features, such as snow cover, are often tied to familiar topographic features,
such as mountain ranges, while atmospheric features, such as ice clouds, are typically not. Do you see that effect in our example?
Step 1: Determine the Purpose of the Product
This natural color RGB provides a vivid depiction of northern Africa. However, it does not show the
major dust front depicted by the arrows very well. That's because the input channels, 0.6 µm Vis, 0.8 µm
Vis, and 1.6 µm NIR, and the resulting RGB fail to show dust in adequate contrast against the surface
of the Earth. Therefore, we need a separate dust RGB for observing airborne dust. As you'll see, this
is a more complex RGB than those discussed earlier.
Step 2: Select Appropriate Channels or Channel Derivatives
Here are the solar, water vapor, and longwave infrared groups of MSG channels for this daytime dust storm
over northern Africa.
Which channel group best depicts the boundary of the advancing dust front? See the area within
the ovals in the 0.6 µm Vis and 10.8 µm IR images. (Choose the best answer.)
The correct answer is C.
The longwave channels (8.7 µm infrared and longer wavelengths) do the best job since dust
contrasts with the thermal background, highlighting the dust front distinctly.
The solar channels are not the best choice here since the reflective dust tends to blend
in with the bright desert background.
The water vapor channels cannot detect dust and land features since they do
not see down to the surface boundary layer where the dust often resides.
Step 3: Pre-Process the Images as Needed
Before combining these channels into an RGB, the input images need to be processed to better highlight
features of interest.
The first image, the 10.8 µm infrared channel, needs what is called contrast stretching.
To understand why, consider how the channel detects dust layers aloft. The radiating temperature of
the surface is greater than that of the dust aloft, therefore the dust stands out against the hotter background.
But the contrast is often limited. For example, at night, the temperatures of the dust and background
surface are similar.
To make the most out of the limited contrast and really highlight the dust features, we stretch the
temperatures within a relatively narrow temperature range, as shown in the chart. The warm cutoff is
289 Kelvin (the white in the image), and the cold cutoff is 261 Kelvin (the black in the image).
The resulting image usually enhances the dust signature and provides useful information when combined
with the other inputs into the RGB.
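For illustration, here is a minimal Python sketch of this linear contrast stretch, using the 261 K and 289 K cutoffs quoted above (brightness temperatures in Kelvin; an output of 0 is black and 1 is white).

```python
import numpy as np

def stretch_ir_10p8(bt_kelvin, t_cold=261.0, t_warm=289.0):
    """Linear contrast stretch: 261 K and colder -> 0 (black),
    289 K and warmer -> 1 (white)."""
    scaled = (bt_kelvin - t_cold) / (t_warm - t_cold)
    return np.clip(scaled, 0.0, 1.0)
```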
Second Input: A Difference Image
In addition to building RGBs from single channel inputs, we can also use difference images, where the
calibrated pixel brightness temperature values of one image are subtracted from those in another image.
For dust imaging, these differences often bring out the dust signature that cannot be observed easily
on single channel images.
Therefore, we will use the 12.0 µm IR minus 10.8 µm IR brightness temperature difference (BTD) for our
second input.
The effectiveness of the BTD stems from the interaction of upwelling energy from the surface of the
Earth with the dust cloud. Infrared energy passing through a dust layer has a colder brightness temperature
at 10.8 µm than 12.0 µm because dust is more sensitive to and absorbs more energy at 10.8 µm. In effect,
dust blocks more upwelling radiation from reaching the satellite at this wavelength.
This differential sensitivity of dust leads to a positive brightness temperature difference and bright
shades in imagery.
Conversely, cirrus clouds are less sensitive to energy at 10.8 µm than 12.0 µm, which produces a negative
difference and black shades on images. This straightforward channel difference provides a powerful way
of differentiating higher clouds from dust.
By differencing the 12.0 µm IR and 10.8 µm IR channels and scaling the difference from -4 to +2 Kelvin,
we get a sharp depiction of the dust clouds on the brightness temperature difference image, which is
perfect for input into the RGB. Notice how the difference image shows the dust cloud in white.
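A minimal sketch of this second input, assuming brightness temperature arrays in Kelvin and the -4 K to +2 K scaling quoted above, might look like this:

```python
import numpy as np

def btd_12p0_minus_10p8(bt_12p0, bt_10p8, lo=-4.0, hi=2.0):
    """12.0 um minus 10.8 um brightness temperature difference (BTD),
    scaled so that dust (positive BTD) appears bright."""
    btd = bt_12p0 - bt_10p8
    return np.clip((btd - lo) / (hi - lo), 0.0, 1.0)
```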
Third Input: Another Difference Image
For our third input to the RGB, we'll use another brightness temperature difference, 10.8 µm IR minus
8.7 µm IR channels.
Both ice clouds and dust have negative brightness temperature differences in the 10.8 µm IR minus 8.7 µm
IR channel difference, making it hard to tell them apart on the resulting image.
But the channel difference, which we scale from 0 to 15 Kelvin, effectively distinguishes dust clouds
in black from desert (sand) surfaces, which provides vital additional information to the RGB.
Step 4: Assign Colors
The best assignment of spectral channels to colors for this RGB is:
- Red for the 12.0 µm IR minus 10.8 µm IR difference image
- Green for the 10.8 µm IR minus 8.7 µm IR difference image
- Blue for the 10.8 µm IR image
In the resulting RGB:
- Magenta, pink, and orange mark dust
- Reds mark thick cirrus cloud
- Dark blue marks thin cirrus cloud
- Orange and brown mark water cloud
- The background appears in various shades of blue
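Putting the three inputs together, a minimal sketch of the dust RGB assembly described above might look like the following. It assumes brightness temperature arrays in Kelvin for the 8.7, 10.8, and 12.0 µm channels and uses the scaling ranges quoted earlier; operational versions (for example EUMETSAT's) may also apply gamma adjustments not shown here.

```python
import numpy as np

def scale(data, lo, hi):
    """Linearly scale data so that lo maps to 0 and hi maps to 1."""
    return np.clip((data - lo) / (hi - lo), 0.0, 1.0)

def dust_rgb(bt_8p7, bt_10p8, bt_12p0):
    red   = scale(bt_12p0 - bt_10p8,  -4.0,   2.0)  # BTD: dust appears bright
    green = scale(bt_10p8 - bt_8p7,    0.0,  15.0)  # BTD: separates dust from desert surface
    blue  = scale(bt_10p8,           261.0, 289.0)  # contrast-stretched 10.8 um window channel
    return (np.dstack([red, green, blue]) * 255).astype(np.uint8)
```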
Step 5: Review the Final Product
What are the dark line segments in the white box northwest of Morocco? (Choose the
best answer.)
The correct answer is C.
These are contrails composed of thin cirrus. The detection is based on one
of the inputs (the 12.0 µm IR minus 10.8 µm IR brightness temperature difference), which enhances thin cirrus. This is an
unexpected side benefit of the dust RGB.
Using RGBs in Different Situations
So far, we have seen three ways of viewing a scene that contains dust:
- Single satellite images, such as longwave infrared images, where detecting dust depends on the thermal contrast
between dust and the surface background
- Channel differences (or BTDs, brightness temperature differences), which enhance the depiction of dust plumes
- An RGB that combines the inputs from the first two options into something that is easy to interpret
The real test of an RGB is whether it can perform in varied conditions. In an operational setting, RGBs are often ‘tuned’
to account for seasonal and geographical differences, as well as different satellite viewing geometries, such as high versus low latitude
views from a geostationary satellite.
Click each tab and see how well the dust product does.
These natural color and dust RGBs are from the next day. Which product does a better
job of depicting dust over water and land? (Use the selection box to choose the answer that best
completes the statement.)
Although it is difficult to see dust over land in the natural color product, it shows up vividly against the dark water background. In contrast, the dust RGB is better at depicting dust over land. Notice the dust streaks that are not apparent in the natural color product. The thing to remember is that no one RGB accomplishes its purpose all the time. You need to know when one is suitable and when to seek out an alternative product.
This dust RGB animation shows how the dust storm evolves over four days and nights. You can tell when
the sun is up since the heated land appears in bluish colors. Pinks and yellows predominate
during the night and then fade to a bluish color typical of land during the day. Thick, high clouds
are dark red while thin, high clouds are dark blue to black in appearance. Low cloud features, commonly water clouds, are shades of orange.
Dust appears as magenta.
Notice the strong system pushing through the Persian Gulf midway through the period
followed by a major dust outbreak. At what time does the dust outbreak reach the southern shore of
the Saudi Peninsula? (Choose the best answer.)
The correct answer is B.
Notice how the dust pushes offshore after it reaches the coast.
This dust RGB animation covers the Atlantic Ocean for nearly one week. Which
of the following are evident? (Choose all that apply.)
All three choices are correct.
Several dust plumes have arisen from specific source regions over Africa.
This dust moves over the ocean and eventually reaches the Americas.
Dust is usually found at middle and low levels of the atmosphere and is
often obscured by higher clouds on satellite products.
Note that this dust RGB can be useful for tropical cyclone forecasting over
the Atlantic Ocean. This is because dust and the dry air that contains it tend to dampen storm strength.
RGBs do not by themselves provide quantitative information. But we can get this kind of information
by overlaying derived satellite products or model data, for example.
In this example, the RGB provides the location of the dust front while the model overlays provide information
about the winds associated with that front and the airmass behind it.
Advantages
Now that we have examined several RGBs, the benefits of using them should be clear.
- They combine different channels to highlight atmospheric and surface features that are harder to distinguish
with single channel images alone; each channel usually represents a particular wavelength although
channel combinations or differences can also be used.
- RGB processing can use channels throughout the spectrum, from the visible and infrared to the passive
microwave; for this reason, RGBs are often called ‘multispectral’; they combine information
from different wavelength regions of the electromagnetic spectrum.
- RGB technology produces intuitive, realistic-looking products that can reduce ambiguities and simplify
interpretation, making them useful for a wide range of users.
- RGBs can be overlaid with quantitative information such as NWP output, radar, and synoptic observations,
enabling far more sophisticated analysis and interpretation.
- A new generation of satellite imaging instruments is coming online, incorporating more spectral channels, improving RGBs, and offering users new options for viewing and analyzing a variety of features and complex processes and interactions.
Limitations
Although RGBs are extremely useful, there are some limitations to be aware of. These are addressed in
the questions below. Answer each question, clicking Done to move on to the next one.
RGBs eliminate interpretation ambiguities. (Choose the best answer.)
The correct answer is False.
RGBs reduce ambiguities, but they do not always eliminate them. Just consider
the high clouds at point C and the snow cover at point A in this natural color RGB; both are cyan.
This highlights the importance of having good interpretation skills, ancillary information, or a
different product altogether! However, the RGB product is still better than single channel images. For example,
it enables us to distinguish high clouds and snow cover (A and C) from low clouds (B).
While RGBs are designed to help identify specific features, they do not by themselves
provide quantitative information, such as cloud droplet size or snow depth. (Choose the best answer.)
The correct answer is True.
Although RGBs come with color interpretation guidelines, in general, you will
not see color bars or legends on them. That's because they are intended for general interpretation
and do not convey quantitative information or objective classifications. In contrast, classification
products are derived products that classify each pixel into various classes. In this example, each
of the 21 cloud or surface types has its own color. Unlike RGBs, classification schemes can be validated
against ground truth and judged based on their performance. Take a minute to compare the natural color RGB
with the companion classification product.
Spectral Channels
The imagers on board Suomi NPP, the upcoming GOES-R, and the future JPSS polar-orbiting satellites have many more spectral channels than
their predecessors. This will enable the development of improved as well as new RGBs, helping to satisfy forecasters’ needs for more concise,
value-added information.
The GOES-R ABI imager will have five more channels compared to the MSG SEVIRI, allowing for an expanded suite of products. The polar-orbiting
MODIS imager already carries similar bands, which has allowed us to preview VIIRS and GOES-R ABI capabilities. Unlike MODIS, however, ABI will produce animations
of RGB imagery over the United States and most of the Western Hemisphere at frequent intervals, from 30 seconds to 15 minutes.
VIIRS Capabilities
With 36 imaging channels, MODIS was designed as a research and development imager. The operational VIIRS imager on board the Suomi NPP and future JPSS
polar orbiters provides similar capabilities with 22 channels that represent 20 wavelengths. While that's fewer than MODIS, VIIRS has one channel from the
DMSP OLS heritage imager that MODIS does not have, a day-night channel (also known as the Day Night Band) that can produce images at night when there's
sufficient light from the moon or other sources.
These two images show the improvements we are getting with VIIRS. With full moonlight, it's easy to see snow cover and low clouds over the
mountainous terrain of northeastern Afghanistan. High-level cirrus cloud is also visible in shades of light blue.
VIIRS Day Night Band
This DMSP OLS visible image, taken on a moonless night, shows many city lights in South Carolina, Georgia, Alabama,
and portions of Mississippi. But most of Louisiana and Texas are dark, with the exception of the Houston
and Dallas Fort Worth areas.
The infrared image shows that the Texas city lights are obscured by thunderstorm cloud cover. The clouds themselves
don't appear on the visible image since there is no moonlight to illuminate them at this time.
Combining the two images into an RGB eliminates the need to interpret the visible and infrared images separately.
It clearly shows the thick cloud cover over Texas and western Louisiana, which obscured the cities.
Recall that the DMSP OLS imager has only two channels, visible and longwave infrared. The RGB is made by assigning
the visible channel to be red and the infrared channel to be both blue and green. This results in clouds that are cyan
and cities that are red. With the new VIIRS imager, we are now able to combine the Day Night Band with potentially 21
other channels, resulting in many new opportunities for multispectral viewing.
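Here is a minimal sketch of that two-channel assignment, assuming the visible (city lights) and infrared images have already been scaled to the 0-1 range, with the infrared enhanced so that clouds are bright.

```python
import numpy as np

def ols_rgb(visible, infrared):
    """Two-channel RGB: visible -> red, infrared -> green and blue,
    so city lights appear red and clouds appear cyan."""
    vis = np.clip(visible, 0.0, 1.0)
    ir = np.clip(infrared, 0.0, 1.0)
    return (np.dstack([vis, ir, ir]) * 255).astype(np.uint8)
```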
To learn more about the VIIRS Day Night Band and its applications, see the COMET module
"Advances in Space-Based Nighttime Visible Observation" at
https://www.meted.ucar.edu/training_module.php?id=990
Overview
This section describes many of the applications for which RGBs are used. The section is arranged by product,
with examples, interpretation exercises, and background information for each one. Use the tabs to review
the introductory tables, and then select the products that you want to learn more about from the menu.
Note that each application has two pages of information accessible via tabs at the top of the page (About
and Examples/Exercises). When you reach the bottom of the first page, be sure to scroll up and select the
second tab, rather than clicking the Next Button. The Next/Previous buttons move you between RGB applications.
Summary
About RGBs:
- Generally made from three or more individual or differenced spectral channels; each is assigned to a primary
color (red, green, or blue); the final product highlights atmospheric and surface features that are hard
to distinguish with single channel images alone
- Provide intuitive, realistic looking products that can reduce ambiguities and simplify interpretation
- In some situations, different features can have the same color or the same feature can appear in different
colors. One way to handle this is to animate the products
- Can be overlaid with quantitative information, such as model data or other observational data, enabling
more sophisticated analysis and interpretation
- Are increasingly available online and in near real-time
- Future satellite imagers will have increasing numbers of spectral channels, allowing for more RGBs and
new applications
Sources of RGBs:
The process of building RGBs:
- Step 1: Determine the purpose of the product
- Step 2: Based on experience and available scientific information, select three appropriate channels or channel
derivatives that provide useful information
- Step 3: Pre-process the images as needed to ensure that they provide or emphasize the most useful information
- Step 4: Assign the three spectral channels or channel derivatives to the three RGB color components
- Step 5: Review the product for appearance and effectiveness; revise or tune as needed
Colors in the RGB color model:
- Primary colors: Red, green, and blue
- Secondary colors: Yellow (red + green), cyan (green + blue), and magenta (red + blue)
- Gray: The three primary colors in equal, intermediate amounts
- White: The three primary colors at maximum intensity
- Black: The absence of the primary colors
Uses of RGB products:
Commonly used RGB products:
You have reached the end of the module. Please consider taking the quiz and filling out the survey.