Here's the new thread
wait. let me prepare my popcorn
"It is one thing to make a picture of what a person looks like, it is another thing to make a portrait of who they are" - Paul Caponigro
Constructive Critique of my photographs is always appreciated
Now that is a question I've often pondered myself. Sometimes I think they are worse than whites.
Well, just to throw something into the mix to mess around a little:
The colour red is the fastest colour (according to physics), so when taking an image the red hits the sensor first. By the time the exposure is done taking in all the light and colours, the red has been exposed onto the sensor the longest and is thus oversaturated: too much red. (Probably not true, just throwing it out there for fun )
I ewesuwally have no problem with reds. Now a white has got to be really good, otherwise I get a red after a glass or two.
That's some odd physics there, Bits. Rather like one of the physics I subscribe to.
CC, Image editing OK.
Think this thread could be moved to f/stops so all AP members that don't look in Gear Talk learn something maybe (but then do those that look in Gear Talk look at f/stop? And do those that look in Gear Talk and f/stop have any idea on this subject? So maybe it should be in Photographic Help and Advice somewhere?)
As good as cameras are, they just can't reproduce what the human eye can see in many cases. That's where PPing can help.
Have only recently started reading a little about reds, but the light is important and pulling back the red channel a bit can help realism. Not that I'm sure.
Last edited by Mark L; 08-08-2015 at 9:02pm.
Is Arthur doing some studying?
probably as true as I remember the issue from many years ago.
Think of it this way: (and here comes my long convoluted reply!!)
Many folks use an ND filter to block some of the available light so as to increase the time the shutter stays open.
The single biggest complaint you read from many that do this is that the resultant images may have a red cast to them.
This is not usually a fault of the filter itself; it's simply a small amount of IR (near infrared, actually) getting through to the sensor.
The reason you see this red light being captured is simply that you have (in that situation) blocked most of the visible light while the near-IR still gets through.
Of course some filter manufacturers produce filters that minimise the red cast, and that's most likely due to IR blocking methods.
(some ND filter makers actually advertise this point)
Many things in the natural world seem to reflect IR light better than they do both visible and UV. UV light (the bluer end of the spectrum) is absorbed more easily (just as it is by human skin, where it causes cancers and so on).
That is, the UV end of the spectrum is harder to capture on most media (film included), and special films have been created for it.
I've been researching the UV spectrum for a number of years now, and have plans to mod one of my cameras into a UV(or full spectrum) capable type.
Of all the literature and info I've read on the topic, even if the camera is UV capable, the most difficult aspect of this genre of imagery is to eliminate the visible and red/IR end of the spectrum from UV captures.
The necessary filters for this purpose are insanely expensive too.
But the info I read and understand is all the same: UV-only image captures require massively more exposure time because of the lack of UV reflectivity.
I've read some of the articles on the topic of structural colours .. and to be honest I can't really understand them fully, apart from the iridescence section(which make perfectly good sense).
But in the vast majority of instances structural colour isn't a possible explanation.
The topic is probably very hard to visualise or understand, but I think a good analogy is something we all know of in nature: some of the phenomena we can understand with respect to something as simple as many common plants, which I'm pretty sure many of us have experienced in photography as well.
If you have ever seen an image of many green plants in UV light, much of the image of that plant will be black.
This is very simple to explain.
While we see it as green (due to the chlorophyll), in the UV-only spectrum the black is explained by a total lack of reflectance. That is, a UV-capable system (camera, or an insect such as a bee) doesn't actually see it in any colour other than black. The UV light isn't reflected; it's absorbed (and produces energy via photosynthesis .. etc, etc .. basic biology that we learned in early high school)
If we now think of the many IR images taken of the same natural world, that green which is black in the UV system is rendered white in the IR system.
Think of the very common IR images where supposedly green tree leaves are rendered white.
The IR (or, if you like, just think of it as red) is totally reflected back, and the white indicates oversaturation or over exposure (that's why it's white!)
While I'm not saying that blue light is UV, it is at the short-wavelength end of the visible spectrum, with green in the middle and red at the long end, followed by near-IR and IR.
But the analogy holds true: in the natural world blue light tends to reflect less, green a bit more, and red more still.
In terms of UV vs VIS vs IR, there can be something like 10 stops of difference in the capture of UV compared to IR. So where IR may require 1/1000s to capture the light reflected, with UV the same scene can need something like 1 sec.
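To put a number on that gap: each stop (Ev) is a doubling of exposure time, so the distance between a 1/1000s exposure and a 1s exposure of the same scene works out to about 10 stops. A quick sanity check (the two shutter times are just the hypothetical figures above):

```python
import math

ir_shutter = 1 / 1000  # seconds; hypothetical IR exposure time
uv_shutter = 1.0       # seconds; hypothetical UV exposure of the same scene

# Each stop (Ev) is a doubling of exposure time
stops = math.log2(uv_shutter / ir_shutter)
print(round(stops, 1))  # -> 10.0
```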
And remember the topic here is about reflected light .. not incidence light.
Where UV is a problem is in its incident form. That is, the filter pack over your sensor is there to stop contamination from UV light being projected into it (usually from the sun).
While one obvious way to minimise the over-exposed red effect is to lower exposure, it's not the only method.
Another method when capturing a raw image is altering WB to minimise red sensitivity (obviously using a cooler temperature setting for WB).
And the other method I use is to lower contrast to an acceptable level.
I remember years ago when Andrew(I@M) once told me to try using a Nikon Picture Control setting known as D2X mode.
I'd purchased a 2008 model camera and he's telling me to use a 2006 model camera's contrast setting!
Of course I'd have none of that, until I did actually try it one day: the exact same image with blown-out red highlights using the standard Picture Control in the camera came up perfectly rendered just by using this D2X mode Picture Control.
(note that Nikon Picture Controls are simply a one click method to alter the tone curve or contrast).
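As a rough illustration of what a flatter Picture Control does (a minimal sketch; the slope numbers are made up, not Nikon's actual curves): a steep default contrast curve can push an almost-clipped red value over the top, while a flatter curve keeps the same value below clipping.

```python
def apply_contrast(x, slope):
    """Linear contrast curve pivoting at mid-grey, clipped to [0, 1]."""
    return min(1.0, max(0.0, 0.5 + slope * (x - 0.5)))

red = 0.93  # near-clipped red channel value out of the raw converter

print(apply_contrast(red, 1.25))  # punchy default curve: clips to 1.0
print(apply_contrast(red, 0.95))  # flatter "D2X-like" curve: ~0.91, detail kept
```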
- - - Updated - - -
Sorry Arthur, it doesn't make sense. I, and others, have noticed that digital cameras can oversaturate reds even when the histograms (rgb histograms) say that it is not oversaturated. This often occurs with reds in sunlight, like red birds. If it was simply IR leaking in then the histogram should show it.
yep, it was a good game.
Last edited by Steve Axford; 08-08-2015 at 9:50pm.
Are you sure it isn't your monitor that is oversaturating the red channel, seeing as the histogram shows that it isn't?
My PBase site: http://www.pbase.com/lance_b
Lance makes a point.
If the histogram shows over exposure, then you should see over exposure.
If you're seeing it but the histogram isn't showing it as such, then the problem is with the display.
This can be a very common issue with wide gamut(ie. aRGB) monitors and the use of incorrect colour spaces via your software.
ps. the red sensitivity issue is real, and I once had some images (of a red rose) that show the difference when using a polariser to eliminate the high reflectance of such a red colour.
While the same is true of a white rose too, the difference in using a polariser to subdue the red was more obvious than it was for a similar but white coloured rose.
I've tried to find those images in the labyrinth that is my archive but have failed .. so I'm assuming that I deleted them as useless images of no worth. If we ever see the sun again down here in Melb, I'll try a new set of sample images.
ps. Windows 10 .. bloody POS it is. Stuffed up my son's laptop! and I had to scramble to find any mouse to do some fixing.
- - - Updated - - -
(apologies for my comings and goings, as I'm moving between two rooms to fix my son's laptop too)
And to be sure, I'm not saying that the sensitivity of the red (ie. the tendency to blow the red channel easily) is simply IR contamination either.
There is definitely some IR contamination in extreme situations .. such as using a 10 stop ND filter, as an example.
My use of the IR analogy is simply to highlight the nature of the different wavelengths of light:
that is, as the wavelengths progress from UV to IR, they have specific properties with respect to how sensitive an image sensor is to them.
At the extremities of those wavelengths are UV and IR.
BTW: IR actually has no colour. Colourful IR images are false colour IR images!
The difference in how sensitive many image sensors are to the various wavelengths was the point.
So I simply wanted to use a similar analogy: as we traverse from blue to red wavelengths in the visible spectrum, the sensor is progressively more sensitive, in a similar manner to how it works from UV through to IR.
Of course the difference isn't going to be 20 Ev, but even 1/2, or more likely 1 Ev, of difference is enough to make the red channel blow out in a normal capture.
So the point wasn't that IR contamination affects all images; the IR filter in the sensor's filter pack takes care of that for us in a normal image.
The other thing I notice (at least with my images) is that this red channel blow-out isn't really an issue unless you have really blown out the image (ie. over-exposed it too far).
Even with 2 or so Ev of over exposure in the red channel, it's very simple to recover the detail back in the reds.
(This probably also depends on the software used too tho).
And different software apply different levels of contrast and saturation as a starting point.
I reckon it is a Canon issue. Probably borne out of the need to supply National Geographic digital cameras many years ago, and we know how National Geographic love having a native in a bright Red dress, or some other bright red object in their photos. So National Geographic probably asked Canon to over-saturate the images at the RAW level, to help minimise their post processing.
My suggestion, swap to Nikon
*Rick now runs and hides*
I had always thought it was a Canon problem, but after searching a bit, I find that it has come up on Nikon forums and Pentax forums as well. It seems that it is a problem with all digital cameras. Arthur is probably right that it occurs with red most frequently because of IR filter leakage, which makes reds in sunlight very bright.
It could also occur with blue or green, but it is less likely. Green is usually a mix of yellow and blue, so it would register on all 3 sensor colours. Pure spectral green is actually very rare, so that could explain why we rarely see the effect on green. With blue, it could occur and some people have commented on blue fairy wrens being over saturated, but I have not noticed the effect to be nearly as pronounced as with red birds. Other colours, like yellow register on two or more sensors, so are less likely to produce the effect.
Back to blue, as that should be the other colour to produce the effect. Perhaps, there are fewer examples of blue in nature than red. This seems likely though I have no references. Also, our eyes are far less sensitive to blue than to other colours, so perhaps we just don't notice it so much.
One final comment on this is about AdobeRGB conversion to sRGB. There will be a bigger loss of colour gamut on reds than there will be on blues. This will mean that on conversion from RAW, more red will be blown than blue. Of course, green would be worse again, but greens are rarely pure colours.
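That gamut loss on reds can be checked numerically. A minimal sketch using the published linear RGB-to-XYZ matrices for the two colour spaces (gamma ignored, so these are linear values): a maximally red AdobeRGB pixel converts to an sRGB red value well above 1.0, i.e. it has to be clipped on conversion.

```python
import numpy as np

# Published linear RGB -> XYZ (D65) matrices for the two colour spaces
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_ARGB = np.array([[0.5767, 0.1856, 0.1882],
                   [0.2973, 0.6274, 0.0753],
                   [0.0270, 0.0707, 0.9911]])

adobe_red = np.array([1.0, 0.0, 0.0])                 # maximally red AdobeRGB pixel
srgb = np.linalg.inv(M_SRGB) @ (M_ARGB @ adobe_red)   # express it in linear sRGB
print(srgb.round(3))  # red component ~1.4: out of sRGB range, so it clips
```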
Another comment. I usually have my camera set to show areas that are nearly saturated as flashing. This works well for areas that are blown from white light, but very badly for areas that are blown due to oversaturation with a pure colour. The algorithm that calculates the amount of saturation gives a high weight to green then less to red and even less to blue (based on our eyes sensitivity). So, if a colour is pure, it will not show as being oversaturated on the camera display. This is more pronounced with blue, then red, then green last. The only way to check is with the rgb histogram and even that can be wrong if we are going to view the colours in sRGB and they were taken in AdobeRGB (as RAW images usually are). The solution to all this is to underexpose by one or even two stops to ensure all the red detail gets captured. Or, you can go up to your red bird with a light meter and measure the light from it directly. My red birds only hang around for that if they are dead.
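The weighting described above can be sketched in a few lines. Assuming a Rec.709-style luminance formula (a guess at what camera firmware does, but it matches the green > red > blue ordering described), a fully clipped pure red never trips a luminance-only warning, while a per-channel check catches it:

```python
def luminance_clipped(rgb, threshold=0.98):
    # Rec.709 luminance weights: green dominates, blue counts least
    y = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    return y >= threshold

def any_channel_clipped(rgb, threshold=0.98):
    return any(c >= threshold for c in rgb)

pure_red = (1.0, 0.0, 0.0)
print(luminance_clipped(pure_red))    # False: Y is only 0.2126, no blinkies
print(any_channel_clipped(pure_red))  # True: the red channel itself is blown
```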
I remember a very long discussion about this issue many years ago(I think before I got my D300(in 2007) so about 2007, possibly 2006.
The way to expose for all three colours correctly was to use a white balance value that was called UniWB(Uni white balance).
Some folks took the time to calibrate their cameras to a specific WB setting to produce equal exposure in all three channels using lab conditions, and from that they created a white balance setting called UniWB.
You then loaded the WB as a WB preset in your camera, and the issue of overexposing reds went away.
If you had a program to calculate the histogram of all the images in your archives, in general you would find that the red channel was the most exposed part of the image.
Not necessarily over exposed, just exposed at the higher end of the histogram spectrum.
Of course you will almost certainly have images with higher blue channel histograms as well as images with higher green histograms.
But the overall and averaged summation of your image archive's histogram results will almost certainly contain higher red channel outputs.
This UniWB kind of solved this to a degree, but the images simply looked crap! Basically all green. That is, with the UniWB value set, the raw image came up with a very green cast.
So the idea was that you just achieved a good exposure balance across all three channels, then with your software you set WB to one that suited .. and the image would look normal(ie. not green)
The problem was tho that once you set WB to a normal setting, the red channel would increase as per a normal exposure anyhow.
On a technical level, it kind'a made sense to use UniWB, but it also meant more PP work for every image.
I tried it for a bit, but the process was simply more work for no gain.
I just learned to live with the fact that in many cases the red channel would be captured at elevated levels,
kept in mind that my chosen WB setting in PP would determine the outcome of the image's red channel,
and lowered contrast if the red channel was over exposed .. etc, etc
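A toy illustration of why UniWB changed the histogram (the raw values and the daylight multipliers below are made-up but typical-looking numbers): normal white balance applies a large gain to the red channel before the histogram is drawn, so a red channel with real headroom in the raw data can still show as clipped.

```python
import numpy as np

raw = np.array([0.55, 0.30, 0.20])       # hypothetical raw R, G, B (0..1)

daylight_wb = np.array([2.0, 1.0, 1.5])  # illustrative daylight multipliers
uniwb       = np.array([1.0, 1.0, 1.0])  # UniWB: no per-channel gain

print(raw * daylight_wb)  # red lands at 1.10 -> shown as clipped
print(raw * uniwb)        # red stays at 0.55 -> true raw headroom visible
```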
And it has to be stressed very strongly here too .. your choice of editor, and the contrast/tone curve applied to the raw image will(or should) make more of a difference to this effect than adjusting exposure to protect the red channel.
The problem with altering exposure too much to protect the red channel is that it introduces the issue of losing detail in the blue channel, which may not be recoverable (or is harder to recover with any quality)
I reckon I've gone through every raw file editing program ever produced for the Windows environment.
They all render a raw file differently. Some are good, some are great, some are just good all rounders which assist in minimising pp efforts(my main priority for editing).
For me with my Nikon environment, the best software has been ViewNX2 all these years. It is (technically) a crappy program, in that its editing tools and features are so minimal or non-existent .. but the editing process to produce well balanced images (as a start point) is basically a one or two click process. Of course for further editing I then send the image to another program for localised PP work (which VNX2 can't do).
The issue(of software) is one of camera profiling.
If you want perfectly exposed images all the time in every situation, then you'd need to be prepared to profile your camera for almost all lighting conditions known to photographers.
That is, using Adobe software as the basis for the image editing, you'd use something like the X-Rite ColorChecker Passport suite of hardware/software.
You take an image of the Passport device under the specific conditions you want to shoot in, then you shoot your images as per normal.
With all the images back on the computer, your first port of call is to locate the image of the Passport colour checker,
create a profile for the software to work with, then use this camera profile on the images you've captured.
The problem therefore is not simply that the red channel is overexposed .. it's that the contrast/tone curve used in your software is not really ideal for the conditions you've shot in.
WB is supposed to take care of this as well, but WB settings work on a more simple level compared to camera profiling.
The camera profiles created in your editor are basic types. That is the software makers create a profile that simulates a well rounded exposure for most conditions, but those lighting conditions won't be exactly the same as the ones you shoot in all the time.
As an example of what that means: let's say (again Adobe) that you choose their camera profile called Canon 5DIII Vivid. I have to be honest, I don't even know if this exists, as I use Nikon.
But I don't want to sound overly Nikoncentric either. I know of Nikon Landscape mode .. which is Adobe's way to 'simulate' Nikon's Landscape Picture Control. Nikon's Picture Controls are the same thing as Adobe's Camera Profiles in ACR/Lr etc.
The difference is that Adobe's profile called Nikon Landscape is completely different to Nikon's similarly named Picture Control. Unless Adobe were spying on Nikon when the Nikon team created the Landscape Picture Control .. they have no way of knowing under which exact lighting conditions Nikon created their contrast curve called Landscape. The end result is one of very different rendering with a view to resemble the effect.
Problem is, tho, that using Landscape in Lr blows out the red channel much more than Nikon's version in ViewNX2 does.
The exposure of the image prior to this tone curve adjustment is similar. The application of a 'virtual' tone curve(or contrast curve or both) is where the problem is.
My solution to the issue is that (I believe) the issue is not one of a real exposure problem, it's one of a tone curve application. Easily taken care of in my software, (obviously) except in situations of complete and monumental incompetence on my part(but those images get deleted and never referred to ever again! )
ps. no matter the colour profile you use, be that sRGB or aRGB .. on your raw files it has absolutely no bearing whatsoever .. ever!
All it does is tell your raw file viewer/editor to open the raw file in the colour space you shot in, and that's easily reset to any other colour space in your software anyhow.
Over the years and taking every possible situation into account, I've learned that the best colour space to shoot in is sRGB(ie. lowest common denominator factor).
The only situation where a specifically set colour space is important is when shooting jpgs(or more importantly tiff .. but who shoots tiff?), where aRGB is the better choice but can also cause problems too(once again the lowest common denominator issue).
ps. I am slowly grinding my way through my archive of images and tagging them all. Hopefully I'll find any relevant images relating to this issue. I have found the UniWB test shots I initially started to play with, and it was 2008, but they're not relevant to the topic.
Have you tried any other software other than the one you currently use to see how the red channel is rendered?
That is, the scarlet bird image .. have you tried DPP(only as a free choice and easy to use).
Have you tried any of the open source programs, which are usually based on DCRaw's rendering engine?
Not knowing what software you do use, I can almost guarantee that it will render vastly differently as an initial start point, and any contrast/tone adjustments you set will have different results to what you currently use.
An easy to use DCRaw-based alternative is something like RawTherapee.
I'm not trying to change your mind on your choice of software. It obviously doesn't bother me in the least.
Only to highlight how different your software will render the same image, and that this is simply due to the camera profile used to render the image(not actually the software's editing ability!).
Been interesting to follow this discourse.
A few months back I encountered a good test scene for "reds" by chance. ( Mostly because I was there and like Volkswagens )
As was my habit, the camera was set to auto white balance, which normally with the Fuji does a good job. In this instance it rendered everything too cold but white point picking in the raw file fixed that.
With the overall scene and light levels I did a quick think and applied -1.3 exposure compensation. Once the file hit the computer it needed a further -.3 to tame some highlights.
What made it an interesting exercise to me is that there are at least 4 different "reds" in the scene along with varying greens and blues, most of which are "man made" colours.
The first image is a 1/2 processed file, just WB and exposure adjustment. The 2nd image is an attempt to bring back detail in the shaded areas and to present the colours as close to what I saw as possible. It is mostly correct, the glaring deficiency is in the greens of the vegetation in the background which suffered during the shadow retrieval.
The same way as nature presents differing "reds" we get a variety of manufactured hues and surfaces and I feel in this day and age the "red" issue comes down to a combination of the cameras exposure parameters when dealing with colours, the raw conversion tool used and then refinement ability of the chosen software.
It would be wonderful to be able to replicate the above images using a variety of makes and models of camera gear and then to explore the differing editing programs available.
I reckon we would end up with some distinctly different renditions of the scene.
That was a long response, Arthur. I had hoped that this would be, at least partly, a conversation, but it seems not.
A few comments.
"The way to expose for all three colours correctly was to use a white balance value that was called UniWB(Uni white balance).
Some folks took the time to calibrate their cameras to a specific WB setting to produce equal exposure in all three channels using lab conditions, and from that they created a white balance setting called UniWB.
You then loaded the WB as a WB preset in your camera, and the issue of overexposing reds went away."
There were some strange ideas floating around in 2007.
I don't think this has anything to do with the post processing software. I have used CaptureOne and Lightroom. Neither makes a difference. From reading many other forum threads (Fred Miranda, etc), it would seem that this happens with all digital sensors. I really can't follow why you think it is post processing software, as any good software allows for adjustments which would correct it. Perhaps you have not seen the issue and don't believe that it occurs.
You say that the red channel is usually dominant. I did a quick check and I would say that this isn't true for my photos. Green is the dominant colour, by a long way. Probably followed by blue (sky is often present).
I do understand how RAW photos are stored. I was assuming that they were AdobeRGB, as they (Canon at least) don't give any wider gamut options. If they are going to be viewed as sRGB then some colour will be lost.
I have researched this extensively by now and I think my conclusions are basically correct (as stated in the previous responses).
A) Camera histogram and exposure warnings (except discrete R,G,B histograms) are inherently weak when single colours are fully exposed.
B) Red is more commonly the single colour for a variety of reasons.
- Reds in nature are much more common than true greens or blues, and they are usually brighter. Almost all greens are really a mix of blue and yellow (eg frogs, birds, butterflies). The most notable exception is chlorophyll, but even bright vegetation has very significant amounts of blue and red, and the green is usually low luminance. It seems bright because our eyes are very sensitive to it. Blues are very rare in comparison to reds and they are rarely bright.
- We notice reds far more than blues. In other words, our eyes are more sensitive to reds than they are to blues. It can occur with blue birds like Fairy Wrens.
- There may be IR filter leakage in some circumstances which may add to reds.
- - - Updated - - -
Andrew, I think your photo demonstrates that man-made colours also follow the rule that reds are almost always brighter than blues. If you sample the various reds and the various blues in your photo, the reds are almost always brighter. I suspect that most man-made greens are mixes of yellow and blue, though I am not sure of that.
For raw files, there is no colour space. It makes no difference to the capture of the raw data.
Obviously the only part of a raw file that is affected by colour space settings is the embedded jpg preview files.
If your camera has a wide gamut review screen(that is aRGB capable), then I guess that the colour space will make a difference to the way the preview image is rendered.
But in terms of capturing more of the colour gamut or more detail for certain channels in raw files .. zero effect.
You probably won't see any of the effects of altering colour space on your raw file if you choose to use C1, DxO, Adobe software and suchlike.
And by that I mean the manner in which it alters the embedded jpg file .. not the rendering of the raw on screen.
Nikon's software (in the old days CNX2 and VNX2) did alter the actual raw data .. so when you changed colour space on the raw file, the embedded jpg preview files (of which there should be 3) are also re-rendered to suit.
While I have access to DPP (for testing purposes only), I haven't used it much to see how it affects Canon raw files tho.
As for viewing a sRGB file compared to an aRGB file and losing some colour, this is true only because the file you are viewing(on the screen) is a rendering of the raw file data but as a jpg or tiff type file.
As a side note on why colour space is a meaningless side issue for raw files.
If you shoot in aRGB for a tiff or jpg file type image, you obviously lose some colour data in a conversion to sRGB.
Conversely, if you shoot that same image type in sRGB, you don't automatically gain more colour data in a conversion of that sRGB image into an aRGB type. The conversion process tries to accurately map the sRGB colour data into an acceptable aRGB rendering, but no real extra colour data is created. Some strange effects can be seen if you look hard enough.
Yet with a raw file, if you shoot in sRGB(which is all I shoot in now) not only can you convert to aRGB without any colour data loss, but also the even higher ProPhoto colour space too .. and still no data loss, or strange effects in the colour of the raw photo. The data in the raw file is completely agnostic to the notion of colour spaces.
If it weren't, then you would lose something in shooting sRGB and converting to aRGB or ProPhoto.
Besides all that, the actual colour gamut lost in going from aRGB to sRGB is mostly in the green-to-blue range; the red channel is largely unaffected (look at the gamut triangle areas for each colour space).
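The "no loss going up" claim can be sanity-checked with the published linear RGB-to-XYZ matrices for the two spaces (linear values, gamma ignored): mapping an sRGB colour into AdobeRGB and back reproduces it exactly, because the conversion is just an invertible matrix.

```python
import numpy as np

M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_ARGB = np.array([[0.5767, 0.1856, 0.1882],
                   [0.2973, 0.6274, 0.0753],
                   [0.0270, 0.0707, 0.9911]])

srgb_in = np.array([0.8, 0.4, 0.1])                   # some linear sRGB colour
adobe   = np.linalg.inv(M_ARGB) @ (M_SRGB @ srgb_in)  # widen to AdobeRGB
back    = np.linalg.inv(M_SRGB) @ (M_ARGB @ adobe)    # and convert back
print(np.allclose(back, srgb_in))  # True: the round trip is lossless
```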
Can anybody read this?
Sorry! The green is a bit garish!
Last edited by ameerat42; 09-08-2015 at 7:20pm.