It has taken a decade or so for the wider world to get acquainted with (if not necessarily comfortable with) the "HDR" extensions of the JPEG format as an option for image storage and display. But JPEG was always - by design - intended for low-quality "consumer" imaging devices: those which used an 8-bit resolution for their Red, Green, and Blue pixel values. During the whole of its existence, there have been higher-quality sensors available, though typically not in "consumer"-grade equipment. As an example (raised in O'Reilly's ink-on-dead-tree "book" on the PNG image format, back when that format was new and only intermittently supported), medical images constructed from MRI scanners and X-ray sensors routinely used 32-bit pixel values (as the astronomical standard "FITS" file format is designed to support). Today, 12-bit and 14-bit precision detectors are an advertising feature in "consumer-grade" astronomical image sensors, and eventually that will work its way into the wider imaging world.
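(For scale: each extra bit of precision doubles the number of recordable intensity levels - roughly one photographic "stop". A couple of lines of Python make the gulf between the formats obvious:)

    # Distinct intensity levels per channel at common sensor bit depths;
    # each additional bit doubles the range, i.e. adds one "stop".
    for bits in (8, 12, 14, 16, 32):
        print(f"{bits:2d}-bit: {2 ** bits:>13,} levels")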
One response to this has been the "High Dynamic Range" extensions to the JPEG image format - which I stopped looking at when faced with choices over "colour gamut" and "encoding format", and realised the field was in the middle of a format war - a minefield I did not wish to pogo-stick through. I have a depressing habit of picking the losing side in format wars (just ask my VL-bus SCSI interface card!).
Whether or not that format war is over, the more restrained field of technical imaging has a new entry for the display of high-dynamic-range images. From the point of view of point'n'click photographers of pretty pictures, it's probably not of much interest - because of the way it treats the low-intensity parts of the image - but for imagery with both high-brightness objects (e.g. stars, or a band on stage) and low-brightness objects (e.g. nebulosity surrounding the stars, or people in the audience of the band) in the same frame, it presents the low-brightness parts of the image with a stretched brightness while retaining the colour information in the high-brightness parts.
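Mechanically, I take the scheme to be something like the toy sketch below (Python/numpy; the function name, the asinh stretch, and the simple single-channel cut are my own assumptions for illustration - the paper's actual recipe is more careful than this):

    import numpy as np

    def colour_over_grey(r, g, b, cut):
        """Toy sketch: pixels brighter than `cut` (judged on the G channel)
        keep their colour; fainter pixels are rendered as an *inverted*
        greyscale, so faint structure shows dark against a light background."""
        def stretch(x):
            return np.arcsinh(x / x.std())        # tame the bright cores
        rgb = np.dstack([stretch(c) for c in (r, g, b)])
        rgb /= rgb.max()                          # normalise for display

        faint = g < cut                           # the brightness cut
        grey = stretch(g)
        grey = 1.0 - grey / grey.max()            # inverted greyscale
        rgb[faint] = grey[faint][:, None]         # grey below, colour above
        return np.clip(rgb, 0.0, 1.0)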
The software is provided under the GNU General Public License (version 3, or - at your option - any later version). I'd expect to see it appearing in astronomy equipment driver programmes in due course, and possibly migrating out into more general image processing after that.
For image formats, comparison images are the norm. In deference to Hubble, and COSTAR, they present an image of Messier 51, the "Whirlpool Galaxy" (whose Hubble imaging was itself a reference to the pen-and-ink drawing made by William Parsons in 1845 - arguably the start of astronomical imaging of the object). Their first image is a recent survey image, with the R, G, and B channels displayed unweighted:
Their caption describes it as a "traditional colo[u]r image, where the background regions become black."
The centres of the two interacting galaxies are saturated - whatever pixel values are recorded in the file data, the visual image does not convey the tightness of the nebular condensations. There is a hint, in the black background, of "tails" of material ejected by the galaxies' interaction.
Their caption describes this as "modified weights of the channels balance to obtain a bluer image". This is how the Hubble "first light" (and COSTAR "fixed light") images were presented. There is a strong contrast between the (relatively) old stars of the galactic cores and the relatively young populations in the spiral arms.
Given the sensitivity of human visual systems, this is the sort of presentation commonly offered to the public. But it remains the result of combining images taken through red, green and blue filters, and is not the only image that could be taken of the object, given the five filter slots typically available in an astronomical imager. The filters chosen here correspond, reasonably well, to the sensitivities of the human eye's three normal visual pigments.
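(Channel weighting of this sort is not magic - it is just a per-channel multiplier applied before display. A minimal sketch, assuming the channels live in three separate FITS files; the file names and weight values here are placeholders, not the survey's actual figures:)

    import numpy as np
    from astropy.io import fits

    def combine_channels(paths, weights=(1.0, 1.0, 1.0)):
        """Stack R, G, B frames into one displayable array; (1, 1, 1)
        gives the unweighted image, boosting blue gives the 'bluer' one."""
        chans = [fits.getdata(p).astype(float) * w
                 for p, w in zip(paths, weights)]
        rgb = np.dstack(chans)
        return np.clip(rgb / np.percentile(rgb, 99.5), 0.0, 1.0)

    unweighted = combine_channels(["r.fits", "g.fits", "b.fits"])
    bluer = combine_channels(["r.fits", "g.fits", "b.fits"],
                             weights=(0.8, 1.0, 1.4))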
The careful observer will note that the yellow "interaction tails" of the galaxies are now even less visible than in the unweighted image. That is why astronomers don't smash all their image channels into one, but keep them distinct in the FITS file format, then choose how to present them in their viewer.
The paper's caption is "gray background color image; this is the default mode of astscript-color-faint-gray. The separation between color, black, and gray regions are defined from surface brightness cuts (see text) of the G-channel (rSDSS). The use of the gray background colormap reveals diffuse low surface brightness structures that would otherwise remain unveiled."
And indeed, the "low surface brightness" of the galaxies' interaction tails becomes considerably more visible. The authors also note that the "cut" between the "colour normal" and the "inverse grey" scaling is set at a low level on the green channel.
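("Surface brightness" here is the usual astronomical magnitude-per-square-arcsecond measure. A hedged sketch of how such a cut might be computed - the zero point, pixel scale, and cut value below are invented for illustration, not taken from the paper, and `g` is assumed to be the G-channel array loaded earlier:)

    import numpy as np

    def surface_brightness(counts, zeropoint, pix_scale):
        """Pixel counts -> surface brightness in mag/arcsec^2, given the
        frame's photometric zero point and pixel scale in arcsec."""
        with np.errstate(divide="ignore", invalid="ignore"):
            return zeropoint - 2.5 * np.log10(counts / pix_scale ** 2)

    mu_g = surface_brightness(g, zeropoint=22.5, pix_scale=0.55)
    faint = mu_g > 26.0   # fainter (numerically larger) than the cut -> grey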
The final image uses the normal (for astronomy) replacement of a human-compatible channel with one taken through a different astronomical narrow-band filter (this one happens to be in the red at 660 nm - radiation released by the hydrogen-α energy transition), though they use it to replace the "green" human-compatible channel.
Their caption is "colo[u]r image using the Hα narrow band filter (J0660) for the intermediate (G) channel instead of rSDSS. The use of this filter reveals interesting features such as the star-forming regions that are shown in green." This also reveals some structure within the cores of the galaxies.
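(In code terms the substitution is trivial - reusing the combine_channels sketch from above, with a placeholder file name for the narrow-band frame:)

    # Hα frame standing in for the G channel; file names are placeholders.
    halpha_version = combine_channels(["r.fits", "halpha_j0660.fits", "b.fits"])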
Another feature also intrudes - a routine problem with astronomical imaging: in the image's "north-west" there is a linear feature which appears only in this channel. This is most likely a near-Earth satellite which crossed the field of view while the chip was being exposed through the Hα filter. Astronomical sensors are designed to be sequentially exposed through an external filter (R, G, B, Hα - there are hundreds if not thousands in use), while consumer-grade chips expose either parts of their pixel array permanently under R, G, or B filters, or alternating rows of pixels under strips of filter. Naturally, this reduces their sensitivity and pixel count per channel by a factor of about 4 - which "consumer-grade" sensors can accept, but astronomers won't - hence the single-channel artefacts. Yes, sequential filtering can reduce sensitivity to rapidly-changing events, but that isn't a common problem in astronomy, and the long-established astronomical habit of taking many short-exposure shots and electronically "stacking" them mitigates it. It is, though, a factor astronomers take into account when planning observations - particularly observing strategies for "Targets Of Opportunity" (TOOs), such as a gravitational wave event reported near the line of sight of a "light bucket" telescope, when the operators will slew to the TOO and take a series of pre-planned images to cover the available sensitivity and detectability gamut.
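(Stacking is also what rescues you from the satellite: a transient trail appears in only one frame, so a per-pixel median across many registered short exposures rejects it while still accumulating signal. A minimal sketch, assuming the frames are already aligned; the file pattern is a placeholder:)

    import glob
    import numpy as np
    from astropy.io import fits

    frames = np.stack([fits.getdata(f).astype(float)
                       for f in sorted(glob.glob("m51_halpha_*.fits"))])
    stacked = np.median(frames, axis=0)   # transients fall out of the median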
It is unlikely that this new image display format will intrude into the general public's party pictures any time soon. But astronomers, people reading astronomy papers, and possibly the surveillance industry, are likely to see it more frequently. The built-in ability to customise the "cut-off" level in both channel and intensity will take some getting used to.
It strikes me that this is not unlike the visual effect called "solarisation", which artistic photographers played with from the late 19th century until the death of darkrooms in the late 1980s - and which may still be in use by some artistic photographers to this day. They might like to play with this too.