
View Full Version : ok Arthur, why do reds oversaturate?



Steve Axford
08-08-2015, 7:24pm
Here's the new thread

ricktas
08-08-2015, 7:33pm
wait. let me prepare my popcorn :D

farmmax
08-08-2015, 7:57pm
Now that is a question I've often pondered myself. Sometimes I think they are worse than whites.

bitsnpieces
08-08-2015, 8:15pm
Well, just to throw something into the mix to mess around a little:

The colour red is the fastest colour (according to physics), so when taking an image, the red hits the sensor first. By the time the image is done taking the exposure and taking in all the light and colours, the red would have been the colour exposed onto the sensor the longest, thus, oversaturated - too much red (probably not true, just throwing it out there for fun :))

ameerat42
08-08-2015, 9:24pm
I ewesuwally have no problem with reds. Now a white has got to be really good, otherwise I get a red :o after a glass or two.
That's some odd physics there, Bits. Rather like one of the physics I subscribe to.:D:D

Mark L
08-08-2015, 10:21pm
Think this thread could be moved to f/stops so all AP members that don't look in Gear Talk learn something maybe (but then do those that look in Gear Talk look at f/stop? And do those that look in Gear Talk and f/stop have any idea on this subject? So maybe it should be in Photographic Help and Advice somewhere?:D)

As good as cameras are, they just can't reproduce what the human eye can see in many cases. That's where PPing can help.
Have only recently started reading a little about reds, but the light is important, and reducing the red channel a bit can help realism. Not that I'm sure.:(

Steve Axford
08-08-2015, 11:12pm
Is Arthur doing some studying?

arthurking83
08-08-2015, 11:26pm
Well, just to throw something into the mix to mess around a little:

The colour red is the fastest colour (according to physics), so when taking an image, the red hits the sensor first. By the time the image is done taking the exposure and taking in all the light and colours, the red would have been the colour exposed onto the sensor the longest, thus, oversaturated - too much red (probably not true, just throwing it out there for fun :))


:th3:

probably as true as I remember the issue from many years ago.

Think of it this way: (and here comes my long convoluted reply!!)

Many folks use an ND filter to block some of the available light, to increase the time that the shutter is open.
The single biggest complaint you read from many that do this is that the resultant images may have a red cast to them.

This is not usually a fault of the filter itself; it's simply a small amount of IR (near infrared, actually) getting to the sensor.
The reason you see this red light being captured is simply because you have (in that situation) blocked the visible light from being captured.

Of course some filter manufacturers produce filters that minimise the red cast, and that's most likely due to IR blocking methods.
(some ND filter makers actually advertise this point)

Many things in the natural world seem to reflect IR light better than they do both visible and UV. UV light (the bluer end of the spectrum) is absorbed more easily (just as it is by human skin, where it causes cancers and so on).

That is, the UV end of the spectrum is harder to capture onto most mediums (film included), and special films have been created for this.
I've been researching the UV spectrum for a number of years now, and have plans to mod one of my cameras into a UV(or full spectrum) capable type.

Of all the literature and info I've read on the topic, even if the camera is UV capable, the most difficult aspect of this genre of imagery is eliminating the visible and red/IR end of the spectrum from UV captures.
The necessary filters for this purpose are insanely expensive too.
But all the info I've read and understood says the same thing: doing UV-only image captures requires massively more exposure time because of the lack of reflectivity of UV.

I've read some of the articles on the topic of structural colours .. and to be honest I can't really understand them fully, apart from the iridescence section (which makes perfectly good sense).
But in the vast majority of instances structural colour isn't a possible explanation.

The topic is probably very hard to visualise or understand, but I think a good analogy is something we all know of in nature .. some of the phenomena we can understand with respect to something as simple as many common plants, which I'm pretty sure many of us have experienced in photography as well.

If you have ever seen an image of many green plants in UV light, much of the image of that plant will be black.
This is very simple to explain.
While we see it as green (due to the chlorophyll or whatever), in the UV-only spectrum the black is explained by a total lack of reflectance. That is, a UV capable system (camera, or insect such as a bee) doesn't actually see it in any colour other than black. It doesn't reflect the UV light; it's absorbed (and produces energy via photosynthesis .. etc, etc .. basic biology that we learned in early high school).

If we now think of the many IR images taken of the same natural world, that green, which is black in the UV system, is rendered white in the IR system.
Think of the very common IR images where supposedly green tree leaves are rendered white.
The IR (or, if you think of it as just red) is totally reflected back, and the white indicates oversaturation or over exposure (that's why it's white!)

While I'm not saying that blue light is UV, it is at the very high end of the spectrum, with green in the middle and red at the lower end, followed by near IR and IR.

But the analogy holds true. Blue light reflects less, green light a bit more and red light even more.
In terms of UV vs VIS vs IR, there may be something like 10 stops of difference in the capture of UV compared to IR. So where IR may require 1/1000s to capture the light reflected, with UV the same scene needs something like 1 sec.
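As a quick sanity check on that gap, the stops arithmetic is just a base-2 logarithm of the exposure-time ratio:

```python
import math

# 1/1000 s (the IR example) versus 1 s (the UV example)
stops = math.log2(1 / (1 / 1000))
print(round(stops, 2))   # 9.97, i.e. about 10 stops
```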

And remember the topic here is about reflected light .. not incident light.

Where UV is a problem is in its incident form. That is, the filter over your sensor is stopping contamination from UV light being projected into it (usually from the sun).

While one solution to minimise the over exposed red effect is obviously to lower exposure, it's not the only method.
Another method when capturing a raw image is altering WB to minimise red sensitivity (obviously using a cooler temperature setting for WB).
And the other method I use is to lower contrast to an acceptable level.

I remember years ago when Andrew (I@M) once told me to try using a Nikon Picture Control setting known as D2X mode.
I'd purchased a 2008 model camera and he was telling me to use a 2006 model camera's contrast setting!
Of course I'd have none of that, until I did actually try it one day, and the exact same image with blown out red highlights using the standard picture control in the camera came up perfectly rendered just by using this D2X mode Picture Control.
(note that Nikon Picture Controls are simply a one click method to alter the tone curve or contrast).
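To make the contrast idea concrete, here's a minimal sketch of how a flatter tone curve keeps a near-clipped red channel under the clipping point. This is a toy S-curve for illustration only, not Nikon's actual Picture Control maths:

```python
import numpy as np

def apply_tone_curve(linear, contrast):
    """Toy S-curve around mid-grey: contrast > 1 steepens midtones and
    pushes highlights toward clipping; contrast < 1 flattens them."""
    return 0.5 + np.sign(linear - 0.5) * (np.abs(linear - 0.5) * 2) ** (1 / contrast) / 2

red = np.array([0.80, 0.90, 0.95])   # bright red values, close to clipping

print(apply_tone_curve(red, 1.4))    # punchy curve: values pushed toward 1.0
print(apply_tone_curve(red, 0.9))    # flatter curve: headroom preserved
```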

- - - Updated - - -


Is Arthur doing some studying?

LOL! .... no, watching Hawks vs West Coast .. awesome game .. and also installing Win10 on my son's laptop ... and I've had this reply going for about an hour or so too.

Steve Axford
08-08-2015, 11:49pm
Sorry Arthur, it doesn't make sense. I, and others, have noticed that digital cameras can oversaturate reds even when the histograms (RGB histograms) say they are not oversaturated. This often occurs with reds in sunlight, like red birds. If it was simply IR leaking in, then the histogram should show it.

yep, it was a good game.

Lance B
09-08-2015, 12:09am
Are you sure it isn't your monitor that is oversaturating the red channel, seeing as the histogram shows that it isn't?

arthurking83
09-08-2015, 1:16am
Lance makes a point.

If the histogram shows over exposure, then you should see over exposure.

If you're seeing it but the histogram isn't showing it as such, then the problem is with the display.

This can be a very common issue with wide gamut(ie. aRGB) monitors and the use of incorrect colour spaces via your software.

ps. the red sensitivity issue is real, and I once had some images (of a red rose) that showed the difference when using a polariser to eliminate the high reflectance of such a red colour.
While the same is true of a white rose too, the difference in using a polariser to subdue the red was more obvious than it was for a similar but white coloured rose.

I've tried to find those images in the labyrinth that is my archive but have failed .. so I'm assuming that I deleted them as useless images of no worth. If we ever see the sun again down here in Melb, I'll try a new sample of images.



ps. Windows 10 .. bloody POS it is. Stuffed up my son's laptop! and I had to scramble to find any mouse to do some fixing.

- - - Updated - - -

(apologies for my comings and goings, as I'm moving between two rooms to fix my son's laptop too)

And to be sure, I'm not saying that the sensitivity of the red (ie. the tendency to blow the red channel easily) is simply IR contamination either.

There is definitely some IR contamination in extreme situations .. such as using a 10 stop ND filter, as an example.

My use of the IR analogy is simply to indicate, or highlight, the nature of the different wavelengths of light.

That is, as the wavelengths progress from UV to IR, they have specific properties with respect to how sensitive an image sensor is to them.
At the extremities of those wavelengths are UV and IR.
BTW: IR actually has no colour. Colourful IR images are false colour IR images!

The difference in how sensitive many image sensors are to the various wavelengths was the point.

So I simply wanted to use a similar analogy: as we traverse from blue to red wavelengths in the visible spectrum, the sensor is still more sensitive to the longer wavelengths, in a similar manner to how it works for UV through to IR.
Of course the difference isn't going to be 10 stops, but even 1/2, or more likely 1 Ev, of difference is enough to make the red channel blow out in a normal capture.

So the point wasn't that IR contamination affects all images; the IR filter in the sensor's filter pack takes care of that for us in a normal image.

The other thing I notice (at least with my images):
This red channel blow out isn't really an issue unless you have really blown out the image (ie. over exposed it too far).
Even with 2 or so Ev of over exposure in the red channel, it's very simple to recover the detail back in the reds.
(This probably also depends on the software used too tho.)
And different software apply different levels of contrast and saturation as a starting point.

ricktas
09-08-2015, 8:25am
I reckon it is a Canon issue. Probably borne out of the need to supply National Geographic digital cameras many years ago, and we know how National Geographic love having a native in a bright Red dress, or some other bright red object in their photos. So National Geographic probably asked Canon to over-saturate the images at the RAW level, to help minimise their post processing.

My suggestion, swap to Nikon

*Rick now runs and hides* :p

ricktas
09-08-2015, 8:27am
Think this thread could be moved to f/stops so all AP members that don't look in Gear Talk learn something maybe (but then do those that look in Gear Talk look at f/stop? And do those that look in Gear Talk and f/stop have any idea on this subject? So maybe it should be in Photographic Help and Advice somewhere?:D)

As good as cameras are, they just can't reproduce what the human eye can see in many cases. That's where PPing can help.
Have only recently started reading a little about reds, but the light is important, and reducing the red channel a bit can help realism. Not that I'm sure.:(

We could move it to a new forum...every 12 hours!

Steve Axford
09-08-2015, 9:40am
I had always thought it was a Canon problem, but after searching a bit, I find it has come up on Nikon forums and Pentax forums as well. It seems that it is a problem with all digital cameras. Arthur is probably right that it occurs with red most frequently because of IR filter leakage, which makes reds in sunlight very bright.
It could also occur with blue or green, but it is less likely. Green is usually a mix of yellow and blue, so it would register on all 3 sensor colours. Pure spectral green is actually very rare, so that could explain why we rarely see the effect on green. With blue, it could occur, and some people have commented on blue fairy wrens being over saturated, but I have not noticed the effect to be nearly as pronounced as with red birds. Other colours, like yellow, register on two or more sensors, so are less likely to produce the effect.
Back to blue, as that should be the other colour to produce the effect. Perhaps there are fewer examples of blue in nature than red. This seems likely, though I have no references. Also, our eyes are far less sensitive to blue than to other colours, so perhaps we just don't notice it so much.
One final comment on this is about AdobeRGB conversion to sRGB. There will be a bigger loss of colour gamut on reds than there will be on blues. This will mean that on conversion from RAW, more red will be blown than blue. Of course, green would be worse again, but greens are rarely pure colours.
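For anyone who wants to see that conversion effect in numbers, here's a linear-light sketch (gamma ignored) using the standard D65 RGB-to-XYZ matrices for the two spaces. The spaces share their red and blue primary chromaticities, but AdobeRGB's full-intensity red carries more luminance, so it overflows sRGB far more than blue does, and AdobeRGB's green falls outside sRGB entirely:

```python
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def convert(rgb, src, dst):
    """Convert a linear RGB triple between spaces via XYZ."""
    return np.linalg.solve(dst, src @ rgb)

red, green, blue = np.eye(3)
print(convert(red, ADOBE_TO_XYZ, SRGB_TO_XYZ))    # ~[1.40, 0, 0]: clips hard
print(convert(blue, ADOBE_TO_XYZ, SRGB_TO_XYZ))   # ~[0, 0, 1.04]: barely clips
print(convert(green, ADOBE_TO_XYZ, SRGB_TO_XYZ))  # ~[-0.40, 1.0, -0.04]: out of gamut
```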

Steve Axford
09-08-2015, 11:11am
Another comment. I usually have my camera set to show areas that are nearly saturated as flashing. This works well for areas that are blown from white light, but very badly for areas that are blown due to oversaturation with a pure colour. The algorithm that calculates the amount of saturation gives a high weight to green, then less to red and even less to blue (based on our eyes' sensitivity). So, if a colour is pure, it will not show as being oversaturated on the camera display. This is most pronounced with blue, then red, then green last. The only way to check is with the RGB histogram, and even that can be wrong if we are going to view the colours in sRGB and they were taken in AdobeRGB (as RAW images usually are). The solution to all this is to underexpose by one or even two stops to ensure all the red detail gets captured. Or, you can go up to your red bird with a light meter and measure the light from it directly. My red birds only hang around for that if they are dead.
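To illustrate the weighting Steve describes, here's a minimal sketch using the common Rec.601 luma coefficients; camera firmware is proprietary, so treat the exact weights as an assumption:

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # R, G, B weights (Rec.601)

def luma_warns(rgb, threshold=0.98):
    """A luminance-weighted 'blinkies' check, as cameras appear to use."""
    return float(rgb @ LUMA) >= threshold

def channel_warns(rgb, threshold=0.98):
    """A per-channel check, i.e. what the RGB histogram would show."""
    return bool((np.asarray(rgb) >= threshold).any())

pure_red = np.array([1.0, 0.05, 0.05])   # red channel fully saturated

print(luma_warns(pure_red))      # False: luma is only ~0.33
print(channel_warns(pure_red))   # True: the red channel is clipped
```

On these weights a pure blue pixel scores even lower than a pure red one (0.114 vs 0.299), which matches the ordering of blue, then red, then green.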

arthurking83
09-08-2015, 4:31pm
I remember a very long discussion about this issue many years ago (I think before I got my D300 in 2007, so about 2007, possibly 2006).

The way to expose for all three colours correctly was to use a white balance value that was called UniWB(Uni white balance).
Some folks took the time to calibrate their cameras to a specific WB setting to produce equal exposure in all three channels using lab conditions, and from that they created a white balance setting called UniWB.
You then loaded the WB as a WB preset in your camera, and the issue of overexposing reds went away.

If you had a program to calculate the histogram of all the images in your archives .. in general you would find that the red channel was always the most exposed part of the image.
Not necessarily over exposed, just exposed at the higher end of the histogram range.
Of course you will almost certainly have images with higher blue channel histograms, as well as images with higher green histograms.
But the overall and averaged summation of your image archive's histogram results will almost certainly contain higher red channel outputs.
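A rough sketch of that kind of archive survey: average the per-channel means over a folder of JPEGs (Pillow and numpy assumed; the folder path is a made-up example):

```python
from pathlib import Path

import numpy as np
from PIL import Image

totals = np.zeros(3)
count = 0
for path in Path("~/Pictures/archive").expanduser().rglob("*.jpg"):
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    totals += pixels.reshape(-1, 3).mean(axis=0)  # per-image R, G, B means
    count += 1

if count:
    r, g, b = totals / count
    print(f"mean R {r:.1f}  G {g:.1f}  B {b:.1f} over {count} images")
```

(Steve's quick check later in the thread suggests green dominates his archive, so the result clearly depends on what you shoot.)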

This UniWB kind of solved the problem to a degree, but the images simply looked crap! Basically all green. That is, with this UniWB value the raw image was rendered with a very green cast.
So the idea was that you just achieved a good exposure balance across all three channels, then with your software you set WB to one that suited .. and the image would look normal (ie. not green).
The problem was tho that once you set WB to a normal setting, the red channel would increase as per a normal exposure anyhow.
On a technical level it kind'a made sense to use UniWB, but it also meant more PP work for every image.
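An illustrative sketch of why that happens: white balance is applied to the raw data as per-channel gains, so a "normal" WB pushes red back up. The multipliers here are invented, but in the ballpark of typical daylight settings:

```python
import numpy as np

daylight_wb = np.array([2.0, 1.0, 1.5])   # R, G, B gains (hypothetical values)
uniwb       = np.array([1.0, 1.0, 1.0])   # UniWB: no channel favoured

raw_pixel = np.array([0.55, 0.40, 0.30])  # raw values of a bright red subject

print(np.clip(raw_pixel * daylight_wb, 0, 1))  # [1.0, 0.4, 0.45]: red clipped
print(np.clip(raw_pixel * uniwb, 0, 1))        # nothing clipped; with red and
                                               # blue never boosted, the whole
                                               # image takes on a green cast
```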

I tried it for a bit, but the process was simply more work for no gain.
I just learned to live with the fact that in many cases the red channel would be captured at elevated levels,
keeping in mind that my chosen WB setting in PP would determine the outcome of the image's red channel,
and lowering contrast if the red channel was over exposed .. etc, etc

And it has to be stressed very strongly here too .. your choice of editor, and the contrast/tone curve applied to the raw image, will (or should) make more of a difference to this effect than adjusting exposure to protect the red channel.
The problem with altering exposure to protect the red channel too much is that it then introduces the issue of losing detail in the blue channel, which may not be recoverable (or is harder to recover with any quality).

I reckon I've gone through every raw file editing program ever produced for the Windows environment. :p
They all render a raw file differently. Some are good, some are great, some are just good all rounders which assist in minimising PP efforts (my main priority for editing).
For me with my Nikon environment, the best software has been ViewNX2 all these years. It is (technically) a crappy program, in that its editing tools and features are so minimal or non existent .. but the editing process to produce well balanced images (as a start point) is basically a one or two click process. Of course for further editing I then send the image to another program for localised PP work (which VNX2 can't do).

The issue(of software) is one of camera profiling.
If you want perfectly exposed images all the time in every situation, then you'd need to be prepared to profile your camera for almost all lighting conditions known to photographers.
That is, using Adobe software as the basis for the image editing, you'd use something like the X-Rite ColorChecker suite of hardware/software.
You take an image of the Passport device under the specific conditions you want to shoot in, then you shoot your images as per normal.
With all the images back on the computer, your first port of call is to locate the image of the Passport colour checker (http://xritephoto.com/ph_product_overview.aspx?id=1257&catid=28&action=overview),
create a profile for the software to work with, then use this camera profile on the images you've captured.
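At its core, chart-based profiling boils down to solving for a colour transform that maps the camera's measured patch values onto the chart's published reference values. A least-squares sketch of that idea (both arrays are stand-ins; real profiling tools such as X-Rite's do considerably more):

```python
import numpy as np

rng = np.random.default_rng(0)
measured  = rng.random((24, 3))   # camera RGB of 24 chart patches (fake data)
reference = rng.random((24, 3))   # published reference values (fake data)

# Solve measured @ M ~= reference for a 3x3 correction matrix M.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

corrected = measured @ M   # camera colours pulled toward the reference
print(np.abs(corrected - reference).mean())   # residual error of the fit
```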

The problem therefore is not simply that the red channel is overexposed .. it's that the contrast/tone curve used in your software is not really ideal for the conditions you've shot in.
WB is supposed to take care of this as well, but WB settings work on a simpler level compared to camera profiling.

The camera profiles created in your editor are basic types. That is, the software makers create a profile that simulates a well rounded exposure for most conditions, but those lighting conditions won't be exactly the same as the ones you shoot in all the time.
As an example of what that means, let's say (again Adobe) that you choose their camera profile called Canon 5DIII Vivid. I have to be honest, I don't even know if this exists, as I use Nikon.
But I don't want to sound overly Nikon-centric either. I know of Nikon Landscape mode .. which is Adobe's way to 'simulate' Nikon's Landscape Picture Control. Nikon's Picture Controls are the same thing as Adobe's camera profiles in ACR/Lr etc.
The difference is that Adobe's profile called Nikon Landscape is completely different to Nikon's similarly named Picture Control. Unless Adobe were spying on Nikon when the Nikon team created the Landscape Picture Control .. they have no way of knowing under which exact lighting conditions Nikon created their contrast curve called Landscape. The end result is a very different rendering that aims to resemble the effect.
Problem is tho that using Landscape in Lr blows out the red channel much more than Nikon's version does in ViewNX2.
The exposure of the image prior to this tone curve adjustment is similar. The application of a 'virtual' tone curve (or contrast curve, or both) is where the problem is.

My conclusion is that (I believe) this is not a real exposure problem; it's one of tone curve application. Easily taken care of in my software, (obviously) except in situations of complete and monumental incompetence on my part (but those images get deleted and never referred to ever again! :D)

ps. no matter the colour profile you use, be that sRGB or aRGB .. on your raw files it has absolutely no bearing whatsoever .. ever!
All it does is set a tag that tells your raw file viewer/editor to open the raw file in the colour space you shot in. This is easily unset in your software anyhow, and reset to any other colour space.

Over the years and taking every possible situation into account, I've learned that the best colour space to shoot in is sRGB(ie. lowest common denominator factor).

The only situation where a specifically set colour space is important is when shooting jpgs(or more importantly tiff .. but who shoots tiff?), where aRGB is the better choice but can also cause problems too(once again the lowest common denominator issue).

ps. I am slowly grinding my way through my archive of images and tagging them all. Hopefully I'll find any relevant images relating to this issue. I have found the UniWB test shots I initially started to play with, and it was 2008, but they're not relevant to the topic.
Have you tried any other software other than the one you currently use to see how the red channel is rendered?
That is, the scarlet bird image .. have you tried DPP (if only as a free and easy to use choice)?
Have you tried any of the open source software, which are usually based on DCRaw's rendering engine?
Not knowing what software you do use, I can almost guarantee that it will render the image vastly differently as an initial start point, and any contrast/tone adjustments you set will have different results to what you currently use.
An easy to use DCRaw based alternative is something like RawTherapee.

I'm not trying to change your mind on your choice of software; that obviously doesn't bother me in the least.
I only want to highlight how differently your software will render the same image, and that this is simply due to the camera profile used to render the image (not actually the software's editing ability!).

I @ M
09-08-2015, 5:44pm
Been interesting to follow this discourse. :th3:

A few months back I encountered a good test scene for "reds" by chance. (Mostly because I was there and like Volkswagens :D)
As was my habit, the camera was set to auto white balance, which normally does a good job with the Fuji. In this instance it rendered everything too cold, but white point picking in the raw file fixed that.
With the overall scene and light levels I did a quick think and applied -1.3 exposure compensation. Once the file hit the computer it needed a further -0.3 to tame some highlights.
What made it an interesting exercise to me is that there are at least 4 different "reds" in the scene, along with varying greens and blues, most of which are "man made" colours.
The first image is a half-processed file, just WB and exposure adjustment. The 2nd image is an attempt to bring back detail in the shaded areas and to present the colours as close to what I saw as possible. It is mostly correct; the glaring deficiency is in the greens of the vegetation in the background, which suffered during the shadow retrieval.

The same way as nature presents differing "reds", we get a variety of manufactured hues and surfaces, and I feel in this day and age the "red" issue comes down to a combination of the camera's exposure parameters when dealing with colours, the raw conversion tool used, and then the refinement ability of the chosen software.

It would be wonderful to be able to replicate the above images using a variety of makes and models of camera gear and then to explore the differing editing programs available.

I reckon we would end up with some distinctly different renditions of the scene.

https://dl.dropboxusercontent.com/u/9582534/DSCF3520s.JPG

https://dl.dropboxusercontent.com/u/9582534/DSCF3521s.JPG

Steve Axford
09-08-2015, 6:28pm
That was a long response, Arthur. I had hoped that this would be, at least partly, a conversation, but it seems not.

A few comments.

"The way to expose for all three colours correctly was to use a white balance value that was called UniWB(Uni white balance).
Some folks took the time to calibrate their cameras to a specific WB setting to produce equal exposure in all three channels using lab conditions, and from that they created a white balance setting called UniWB.
You then loaded the WB as a WB preset in your camera, and the issue of overexposing reds went away."

There were some strange ideas floating around in 2007.

I don't think this has anything to do with the post processing software. I have used CaptureOne and Lightroom. Neither makes a difference. From reading many other forum threads (Fred Miranda, etc), it would seem that this happens with all digital sensors. I really can't follow why you think it is post processing software, as any good software allows for adjustments which would correct it. Perhaps you have not seen the issue and don't believe that it occurs.

You say that the red channel is usually dominant. I did a quick check and I would say that this isn't true for my photos. Green is the dominant colour, by a long way. Probably followed by blue (sky is often present).

I do understand how RAW photos are stored. I was assuming that they were AdobeRGB, as they (Canon at least) don't give any wider gamut options. If they are going to be viewed as sRGB then some colour will be lost.

I have researched this extensively by now and I think my conclusions are basically correct (as stated in the previous responses).

A) Camera histogram and exposure warnings (except discrete R,G,B histograms) are inherently weak when single colours are fully exposed.

B) Red is more commonly the single colour for a variety of reasons.
- Reds in nature are much more common than true greens or blues, and they are usually brighter. Almost all greens are really a mix of blue and yellow (eg frogs, birds, butterflies). The most notable exception is chlorophyll, but even bright vegetation has very significant amounts of blue and red, and the green is usually low luminance. It seems bright because our eyes are very sensitive to it. Blues are very rare in comparison to reds and they are rarely bright.
- We notice reds far more than blues. In other words, our eyes are more sensitive to reds than they are to blues. It can occur with blue birds like Fairy Wrens.
- There may be IR filter leakage in some circumstances which may add to reds.

- - - Updated - - -

Andrew, I think your photo demonstrates that man made colours also follow the rule that reds are almost always brighter than blues. If you sample the various reds and the various blues in your photo, the reds are almost always brighter. I suspect that most man made greens are mixes of yellow and blue, though I am not sure of that.

arthurking83
09-08-2015, 9:02pm
.....

I do understand how RAW photos are stored. I was assuming that they were AdobeRGB, as they (Canon at least) don't give any wider gamut options. If they are going to be viewed as sRGB then some colour will be lost.

......

The colour space setting in camera is basically only for the purpose of jpg and tiff rendering.
For raw files, there is no colour space. It makes no difference to the capture of the raw data.

Obviously the only part of a raw file that is affected by colour space settings is the embedded jpg preview files.

If your camera has a wide gamut review screen(that is aRGB capable), then I guess that the colour space will make a difference to the way the preview image is rendered.
But in terms of capturing more of the colour gamut or more detail for certain channels in raw files .. zero effect.

You probably won't see any of the effects of altering colour space on your raw file if you choose to use C1, DxO, Adobe software and suchlike.
And by that I mean the manner in which it alters the embedded jpg file .. not the rendering of the raw on screen.
Nikon's software (in the old days CNX2 and VNX2) did alter the actual raw file data .. so when you changed colour space on the raw file, the embedded jpg preview files (of which there are, or should be, three) were also re-rendered to suit.
While I have access to DPP (for testing purposes only), I haven't used it much to see how it affects Canon raw files tho.

As for viewing an sRGB file compared to an aRGB file and losing some colour, this is true only because the file you are viewing (on the screen) is a rendering of the raw file data as a jpg or tiff type file.

As a side note on why colour space is a meaningless side issue for raw files.

If you shoot in aRGB for a tiff or jpg file type image, you obviously lose some colour data in a conversion to sRGB.
Conversely, if you shoot that same image type in sRGB, you don't automatically gain more colour data in a conversion of that sRGB image into an aRGB type. The conversion process tries to accurately map the sRGB colour data into an acceptable aRGB rendering, but the extra data is false. Some strange effects can be seen if you look hard enough.

Yet with a raw file, if you shoot in sRGB (which is all I shoot in now), not only can you convert to aRGB without any colour data loss, but also to the even wider ProPhoto colour space .. and still no data loss, or strange effects in the colour of the raw photo. The data in the raw file is completely agnostic to the notion of colour spaces.
If it weren't, then you would lose something in shooting sRGB and converting to aRGB or ProPhoto.

Besides all that, the actual colour gamut gained in going from sRGB to aRGB is only in the green to blue range; the red corner is largely unaffected (look at the gamut triangle areas for each colour space).

ameerat42
09-08-2015, 9:19pm
Can anybody read this?

:oops: Sorry! The green is a bit garish!

Steve Axford
09-08-2015, 9:29pm
Geez Arthur, you do go on. As I said, I do understand RAW files and I really don't need your lesson. I was just trying to change the subject back to the real issue and maybe I glossed over colour spaces which are an exact science and very well understood, unlike colour creation and colour perception.

arthurking83
10-08-2015, 12:55am
Geez Arthur, you do go on. .....

Apologies.

But if you introduce a variable(in this instance colour space), I have to assume that you may not know that it has nothing to do with the issue.
So if you understand the raw file so well .. why introduce a totally irrelevant topic? :confused013

A topic is only worthy of discussion with somebody once you have determined that the other party truly understands what's being discussed.

So try to see it from my point of view.

You commented on colour space as a possible variable in this discussion.

I explained that colour space has nothing to do with the issue if you're shooting in raw.

You introduce colour space as a variable again .... claiming that Canon don't offer a colour space wider than AdobeRGB!!! :confused:

ie.(from my point of view) .. I don't get what you're trying to say! I don't get why you think Canon needs to provide you with a wider colour gamut. So I try to explain it again .. and to be sure I do try to use as few words as possible, even tho the opposite appears to be the case!


So the end result is that:
You see my comments as: 'Geez he can go on'
I see your comments as: 'what is it he doesn't understand about colour spaces and raw'

wayn0i
10-08-2015, 2:21am
This is an interesting discussion, I must say I've never really paid enough attention to one colour blowing out more than the others.

From the electromagnetic spectrum you would think the opposite would be true, as the blue end has more energy than the reds, and the sensor is only collecting energy.

Mind you, I play with light nearly every work day, and the eye can't see blue as well as red.

I would also expect some fluorescence adding to the red energy levels, remembering that all fluorescent subjects emit red-shifted light. So if a component of sunlight is causing a subject to fluoresce, then that light will be a red-shifted emission from the component causing the fluorescence. Does that make sense to anyone but me?

Must stop, or you'll be calling me Arthur soon, without the depth of knowledge.





arthurking83
10-08-2015, 8:39am
.....

From the electromagnetic spectrum you would think the opposite would the true as the blue end has more energy that the Reds and the sensor is only collecting energy.

.....

Yeah, that's something that'd be easy to 'assume' but the reflective non reflective idea actually makes more sense in a way.

the more blue the light the lower the frequency .. hence more penetrative power.
If it penetrates more, then it's obviously not going to reflect as much.

If we imagine the surface(subject) as a sieve, and the wavelength of light as the material being sifted. The smaller particles(wavelengths) get through the sieve, the large ones are stopped/reflected/collected(or whatever).

ps. I have absolutely no idea if this holds true in any way(in quantum physics) .. it just kind'a makes sense from a physical perspective.

wayn0i
10-08-2015, 8:56am
Yeah, that's something that'd be easy to 'assume' but the reflective non reflective idea actually makes more sense in a way.

the more blue the light the lower the frequency .. hence more penetrative power.
If it penetrates more, then it's obviously not going to reflect as much.

If we imagine the surface(subject) as a sieve, and the wavelength of light as the material being sifted. The smaller particles(wavelengths) get through the sieve, the large ones are stopped/reflected/collected(or whatever).

ps. I have absolutely no idea if this holds true in any way(in quantum physics) .. it just kind'a makes sense from a physical perspective.

I'm not sure that's how it works; surface reflection occurs because the surface colour is the same as the incident light colour. Hence, shine a red light on a red surface and it lightens the red. White light has the entire visible light spectrum, so it lightens any colour; obviously if the surface and incident light are different colours then the surface is either unchanged or darkens.

Is anyone interested in the colour wheel and absorption? It's not a big deal for non-technical photography, I think.





Steve Axford
10-08-2015, 9:28am
It makes sense, Wayne. One of the situations where this effect is apparent is with Scarlet Honeyeaters in sunshine. If the red pigment (I assume that it is a pigment) fluoresces, then this could be part of the problem. In general it could be another factor that makes red the most likely colour to blow.

Arthur. My apologies too. I shouldn't show my frustration like that. The reason I mentioned colour spaces is that every photo I take either ends up as AdobeRGB or sRGB. If I convert to sRGB, then I might make the problem worse, as reds which are nearly saturated before the conversion may be over saturated after. I lazily called the before space AdobeRGB as it is the colour space I usually convert to and it causes no extra problems. It would have been more correct to call it the Canon 5D III colour space. Since I have no way to display or print any colour spaces other than AdobeRGB or sRGB, I tend to (erroneously) call all colour spaces one or the other. This is a side issue for me as I always use AdobeRGB, except when I am selling photos that may be used with sRGB. Then I have to be careful with photos that have very bright reds. I don't need to worry so much with blues or greens as they have never caused a problem.
Since I am happy with my conclusions on the main part of the subject, why reds and not blues or greens, I guess you don't need to comment on that.

- - - Updated - - -

Wayne, I must admit to being fascinated by colour. The whole idea of a colour wheel is peculiar as it relates to our visual system and not at all to wavelengths of light.

MrQ
10-08-2015, 9:33am
If the histogram seems right then how are people judging the reds to be oversaturated? Is it a measurable thing, or just perception?

Red is a colour that stands out more to most humans (assuming good colour vision) - our brains flag it as important (ancient danger sense from blood/fire??). Maybe it's just that.

ameerat42
10-08-2015, 9:39am
A 2ple of points from the immediate foregoing...

1. Blue light has higher frequency than red light... The energy of various frequencies is IMO beyond the scope of the discussion.
2. Sieves aren't necessary EXCEPT as the idea of a colour filter (ie, the Bayer filter), and therefore EVERYthing that GETS THRU the filter gets absorbed by the sensor (and obviously, and by definition, what the sensor is "sensitive" to).

My own opinion favours two things: some compromise in the A/D conversion in the camera electronics, AND some compromise in any logical handling of the digital info afterwards.

- - - Updated - - -

Addendum:
Somewhere in the discussion a description of "oversaturation" might be handy.

arthurking83
10-08-2015, 10:04am
From the outset, I have to say I have no idea how it works either!
So the only thing I'm sure about is that I'm not sure how it works too :D


I'm not sure that's how it works; surface reflection occurs because the surface colour is the same as the incident light colour. Hence, shine a red light on a red surface and it lightens the red. .....

But, I do know that it's not as simple as you describe.
I don't know this because I'm a theoretical particle physicist.
I know this because some of the stuff I've amassed over the years has shown me that it's not so simple.

As one example: I have this old lens. It's not rare, it's very cheap .. simply an obscure thing that 99.999% of photographers would ignore or avoid.
It has red and orange markings, just as many other lenses do.
When I shine any old torch onto it .. it's just brighter.
When I shine either of my UV torches onto it, not much happens, other than the red and orange markings fluoresce (they glow massively bright) to the point where even my eyes can't make out the actual numbers of the markings, because they're over exposing.

So the blue light (UV) is making the two red shifted colours (orange and red) even brighter than they otherwise normally are.

In another thread, Steve writes about 'structural colours'.
I've looked into it on only a couple of sites (I found the Wikipedia entry easy enough to understand).

Then, in your description, it seems to hold true:
If you look for images of UV, Vis and IR versions of human skin, the UV version can look almost black (but definitely much darker), where the Vis rendering will look normal to us (for a Caucasian that will be a slightly red shifted hue), and under IR the skin rendering will look pretty much porcelain white.

So from that, I think that it's the underlying structure of the material and/or the structure of the pigmentation that is the cause of some of these effects.
Once again I don't know this, but simply think it's a possible reason. I suppose it's something else for me to look into as well.

As an interesting side note too: did you know that bluer light can render a sharper image in terms of detail?
It probably doesn't hold true for the very slight relative differences between visible blue and visible red light.
If the detail is there to be captured, under a more UV oriented lighting setup more detail can be captured compared to a setup where lower frequency lighting is used (ie. visible or IR).

- - - Updated - - -


.....
Somewhere in the discussion a description of "oversaturation" might be handy.

What you say makes sense, Am.

Now firstly, I'm not claiming to be an expert on the topic.
Secondly, I keep coming back to the topic of software (as a partial cause of this issue).

I can't find enough material yet, but there's a quick point I want to make.
I have so many instances where in one software editor (remember we're referring to raw files here!!) a red channel is blown out by about +1Ev, but the same unedited image is rendered about -1Ev below full saturation in another editor.
This is going by the respective histograms.
Then, using another editor again (for verification) can display a perfect exposure in the red channel.

A 1Ev shift either way in the respective editor(s) can produce almost identical histograms to the other version.
There's no change in the actual raw file to explain such variations .. so the only plausible explanation (to me) is that this is primarily a software issue.

Steve made a comment earlier about histograms that don't display overexposure while the image appears to be overexposed, and I've seen this myself too.
Again, the software used in such situations has to be questioned (this is one of my dislikes of Adobe raw conversion software).

The problem at the moment is that I haven't reinstalled much of the software to show this, but I will soon.
Then I'll have to find more images to post as samples .. hopefully to confirm that I'm not seeing things, or going mad.

Steve Axford
10-08-2015, 10:49am
If the histogram seems right then how are people judging the reds to be oversaturated? Is it a measurable thing, or just perception?

Red is a colour that stands out more to most humans (assuming good colour vision) - our brains flag it as important (ancient danger sense from blood/fire??). Maybe it's just that.

The frame may not be overexposed on a normal histogram, but may be on the red channel with an RGB histogram. The average metering says it is ok, but you need to use the rgb histogram. Does that make sense?

- - - Updated - - -

Arthur, you said
"In another thread, Steve writes about 'structured colours'.
I've looked into it on only a couple of sites(I found the wikipedia entry easy enough to understand). "
Structural colours are very common in nature, but probably have no bearing on this discussion, as reds are, it seems, usually pigments. It is common to get good red pigments, but very uncommon to get good blue pigments. Consequently, blues are often structural colours, and greens are often a combination of a yellow pigment and a blue structural colour. While structural colours can be so bright that they are almost painful (eg some blue butterflies), they are usually not that bright. We could have a thread dedicated to structural colours, but I think I tried that and there were few takers. Suffice to say, we can safely ignore structural colours in this thread.

- - - Updated - - -

Again, to Arthur, you said
"Steve made a comment earlier about histograms that don't display overexposure while the image appears too, and I've seen this myself too.
Again, the software used in such situations has to be questioned too(one of my dislikes of Adobe raw conversion software)."

I think that, compared to the RGB histogram on the camera, there may be a small shift towards over exposure in the RAW processor, but the effect, if any, is very small. The main effect is when using the camera light meter (pre photo) or the average histogram after the photo. There, the difference is in camera, not in RAW processing software. I really see no indication that this is merely a software problem. Also, it occurs with C1 as well as Adobe.

wayn0i
10-08-2015, 10:58am
Arthur,
Just quickly on your lens red and orange markings example: these are two different responses, the white light being reflection and the UV light being fluorescence, so completely different modes mate.





bitsnpieces
10-08-2015, 11:36am
Simple solution, and I dare say it, switch to Sony - live view with custom white balance/kelvin, and I mean custom, not just picking a mode like sunset, cloudy, incandescent, etc, but each one can further be customised - make it less red, add more blue, more green, or more red, less blue, more green, etc - I've found it very useful

*now tries to find Ricktas to see where he's run off to to hide also*

swifty
10-08-2015, 12:14pm
I'll have to read through the entire thread in more detail when I get more time, as I've been moving over the past weekend, but just skimming through I didn't see anyone touch on Bayer filtration.
Not sure if this is at all relevant, but could some of the issue be due to Bayer demosaicing of an area containing almost entirely one colour, since Bayer requires neighbouring pixels for RGB data?
I haven't dabbled in Sigma Foveon sensors yet, but do they have the same reds (or other colour channels for that matter) saturation issue?
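To make the Bayer point concrete, here's a toy sketch: an RGGB mosaic of a uniform pure-red patch. No real demosaicer is implemented; it just shows the sampling step that interpolation would then have to work from:

```python
import numpy as np

H, W = 6, 6
scene = np.zeros((H, W, 3))
scene[..., 0] = 0.9          # uniformly bright red; green and blue are zero

# Sample the scene through an RGGB colour filter array:
# each sensor site records only one channel.
bayer = np.zeros((H, W))
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]   # R sites
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]   # G sites
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]   # G sites
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]   # B sites

print(bayer[:2, :2])   # [[0.9, 0.0], [0.0, 0.0]] - only the R site has signal
```

For a patch like this, all the tonal information rides on one sensor site in four, so the demosaiced red channel is built from a quarter of the sensor's samples.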

Steve Axford
10-08-2015, 12:45pm
Swifty, I have heard this mentioned on other sites, but it never developed much traction. That may be because most people, like me, don't really understand it. Still, it doesn't explain why only red, or why it is correctable by underexposing. I did read a post by one person who said that this type of thing had always occurred, even with film. Good photographers have always understood that light metering from a distance is not always enough. You have to spot meter possible highlights separately. Of course, doing this with a flighty, little bird isn't possible, but then, there were very few flighty, little bird photos in the days of film.

swifty
10-08-2015, 1:23pm
Yeah.. I must admit I don't understand it that well either. Since the histograms are generated from the jpegs, if the reds are around 250-255 and the green and blue channels are very low, let's say <20, then what's the value of the entire pixel?

Also, if you shot a colour chart under controlled conditions: as you increase exposure closer to the highlight limits, is it just the pure red patch that has problems, or the pure blue and greens too?

Steve Axford
10-08-2015, 1:46pm
So it's just the jpeg that is used for histograms? That should have a definite colour space (Arthur???).
The blown highlights seem to be confined to reds, at least in nature. I'm sure there are examples of blues and greens (people have commented on blue fairy wrens), but I do not see it. I have photos of red fungi, taken in shadow, that register as pure red, eg 180 red, 0 green, 0 blue. I never see that with blues or greens.

This one is very red.
https://steveaxford.smugmug.com/Living-things/Fungi-the-recyclers/Gilled-fungi/Mycena/Mycena-viscidocruenta/i-CKrK3mh/2/XL/_74Z2190-merge-XL.jpg

I understand that the algorithm for calculating total exposure is: green × 0.6, red × 0.3 and blue × 0.1, based on our eyes' sensitivity (the standard Rec. 601 luma weights are 0.587, 0.299 and 0.114, which is presumably where those rounded figures come from). I'm not sure that quite makes sense, but that's what I have heard (seen in posts).

arthurking83
10-08-2015, 3:03pm
I found a couple of images showing the 'software issue' I previously referred to.

ViewNX2:
[attachment 119094]

This screen cap is set with ViewNX2 to indicate the blown highlights referred to in the histogram window. Hence the image is black with the bright red blotch: the black hides the image and simply displays the lost highlights. (For any Nikon/ViewNX2 user interested in how this works: with the image loaded, press H for highlight indication .. press S for shadow losses.)

Note the histogram tho.

ACR(version can't be remembered, but not too distant)
[attachment 119095]

The histogram of a supposedly blown out red channel is not just different but wildly different on the same raw file.
It should be noted that no processing at all has been done in either of the raw editors here.

The image is straight from the camera(raw of course).

Some of the processing I've tried:
In ViewNX2, setting Picture Control to Portrait drops the blown out area to a very few micro dots (pixels), with no WB or any other edit. Blown highlights gone.
Change WB to an appropriate setting of fluoro (high colour rendering in this case). This setting balances the rest of the image and looks quite natural. For reference, that WB temp turns out to be 4100K, zero R-G shift.

In ACR (which I'm quite hopeless with .. but that's not the point):
While the image needs no editing to recover any detail, it is quite dark by comparison in some areas.
To my eye (at the time .. the image being shot in 2010 tho!) the ACR rendering was pretty much wrong: too contrasty, with lost shadow detail. Colours were generally OK; even the large red window blind that the image references looked similarly coloured and exposed (ie. very bright).
But the Nikon camera, and subsequently the VNX2 rendering, were closer to reality overall.
The same (or as similar as can be) settings for WB, contrast and so on were trialled in ACR, to no real avail in matching the VNX2 look.

The point isn't which editor created a better rendering or image .. or works better.
More so, why such a massive difference in the red channel on the exact same image?
Colour spaces don't explain it, as I've tried various colour spaces too, up to ProPhoto (via CaptureNX2). The difference in software (CNX2 vs VNX2) also doesn't explain it, as they render identically.
Also, Nikon's latest software, CNX-D, renders the images identically to Nikon's older software.

So (at least part of) the answer lies in the software's rendering capability.
It should be remembered that for a software manufacturer to allow compatibility with the various hardware (ie. cameras/sensors), they need to produce profiles.
They can only do so much for various conditions, and for conditions outside of those boundaries the user should create appropriate camera profiles.

At the risk of 'going on' again, the issue can be as simple as:

The manufacturer provides their set profiles. Those profiles were made in lab conditions, under various lighting types (eg. fluoro/incandescent/halogen/simulated sunlight etc).
But such a profile isn't as accurate as a real profile made by the user under real conditions. As an example: a landscape shot outdoors at sunset under a cloudy sky filled with smoke, or something weird like that.
The specifics aren't important, but I'm just trying to highlight that the differences may be.

Of course in some situations, such as the structural colours previously referred to, a red channel will be hard to capture with a good overall exposure balance .. butterfly wings, birds' feathers etc.
Same with the UV fluorescence too.

But in many instances, the more common ones not explained by the extreme situations above, the solution could be as simple as software.
Again, I centre my experiences around a Nikon environment and have little to no experience with many other brands.

I used to do the underexposing thing myself too, until I noticed that the issue wasn't as real as I thought it was.
I just changed the way I process the images .. via a few meagre edit steps.

ps. hopefully that wasn't too arduous for a reply :p

- - - Updated - - -


So it's just the jpeg that is used for histograms? That should have a definite colour space (Arthur???).....

Yep! Gotcha.

But how does this affect the raw file back home on your computer?

This is only relevant on the camera's review screen (to a degree).

The jpg file on the camera's review screen has been common knowledge for many years. It's based on two jpg images within your raw files (of which there are, or should be, three).

I think only recently some manufacturers have used aRGB capable LCDs on their cameras (or something to that effect), or screens with adjustable hue capability (or whatever).
I just can't really remember all the newest tech stuff with respect to this.

But I did mention that yes, a colour space setting does affect a raw file in a very limited and strange way, and that is only in the embedded jpg files in the raw file.
Once again, if I have to reiterate it: I used to shoot in aRGB mode on the camera too, because all these supposed experts used to shout loudly that it's the widest gamut and best quality, blah, blah, blah ....

But on the file you're actually interested in, it has no real life bearing .. it just creates potential issues later .. and I tried to explain: use the lowest common denominator to eliminate these issues completely.

Set the camera to sRGB, set your software to ignore this setting for any raw files, and use a wider colour space there.

But, I have to reiterate that for the purpose of this discussion, colour space(in camera) has no bearing.

I should explain too that long ago I also looked into this colour space and embedded preview file topic.
I shot in one colour space, converted to another, extracted the jpg preview file, reset the colour space on that preview file, inserted it back into the raw file, and totally screwed the thumbnail of the raw file whilst not affecting the actual raw file rendering.
The only thing that looked like rubbish on those raw files was the thumbnail, so in any software that renders the raw file via the embedded jpg files the initial view looked like rubbish .. until opened properly.

Steve Axford
10-08-2015, 3:10pm
Still not sure this has any meaning today, Arthur, particularly as it only seems to apply to Nikon. I am really talking about real world examples. I'll show you 4 photos that show the strongest red, blue and green of all my fungi photos (I have perhaps 10,000 to choose from). The red is way out in front for purity of colour, followed by the blue, with the green last. The green bioluminescent fungus is quite strong, about the same as the blue, but nothing like the red. The natural light green is very muddy. This, by itself, can explain why this problem occurs mainly with red.

RED, most saturated sample - R211, G0, B0
https://steveaxford.smugmug.com/Living-things/Fungi-the-recyclers/Gilled-fungi/Mycena/Mycena-viscidocruenta/i-CKrK3mh/2/XL/_74Z2190-merge-XL.jpg

GREEN, most saturated sample - R133, G213, B214
https://steveaxford.smugmug.com/Living-things/Fungi-the-recyclers/Gilled-fungi/Hygrocybe/Hygrocybe-graminicolor/i-vLVwdQ2/2/XL/_MG_2371-XL.jpg

GREEN bioluminescent, most saturated sample - R4, G222, B109
https://steveaxford.smugmug.com/Living-things/Fungi-the-recyclers/Gilled-fungi/Mycena/NR-Luminous-fungi/i-L9tb398/0/XL/_H9A4784-XL.jpg

BLUE, most saturated sample - R8, G131, B252
https://steveaxford.smugmug.com/Living-things/Fungi-the-recyclers/Gilled-fungi/Entoloma/i-8gWh4r6/2/XL/VE8M2443-XL.jpg

ameerat42
10-08-2015, 4:47pm
...but could some of the issue be due to Bayer demosaicing of an area containing almost entirely one colour, since Bayer requires neighbouring pixels for RGB data?
I haven't dabbled in Sigma Foveon sensors yet, but do they have the same reds...

To this end, I have taken a few shots of red objects with same - just what was to hand in the late afternoon and sun-lit.
Will post 'em up later, but will do minimal processing - basically just raw -> jpegs.

arthurking83
10-08-2015, 5:00pm
Still not sure this has any meaning today, Arthur, particularly as it only seems to apply to Nikon. I am really talking about real world examples. .....

1. Which part? The raw file/embedded jpg files, or the software difference issue (or the entire thread :p)?

2. The problem with real world files is that they vary so much in composition and processing that you can't really use them as real study cases for the topic of why the red channel blows out .. or if it actually does.
A reference point is needed.
As for images with colour purity, it's a trivial matter to print out a pure red, green or blue swatch and photograph it, presenting it as an instance of colour purity and possibly a blown channel if this is desired.
I realise that this is not a real world situation for many of us (but it may be, say, for a product photographer in a studio shooting certain products).

I understand your point about real world images tho .. for me that used to be predominantly landscapes captured at sunset .. where reds would blow out wildly, causing me to underexpose, only to find out that if I used X brand software the issue was non existent. Then I'd follow that up with an application of -2% contrast reduction, which would miraculously recover the blown red channel.

We know from this thread that the red channel is more sensitive. The reasons are almost certainly manifold:

1. red simply reflects more strongly (as per the UV vs IR comparison)
2. image sensors almost certainly are more sensitive to the red channel as well (I know the D70s, and hence D100, had a sensor very sensitive to the red/IR channel way back in their day)
3. software can alter this perception (because vendors produce their own specific raw conversion algorithms).

From some of my observations over time as well, it's easier to recover lost shadow detail in blue, but harder to recover lost blue highlight detail.
It's exactly the opposite for reds tho, but in saying that, I've only seen very few images of mine where I've grossly underexposed the red channel (that cyan-ish green colour isn't one I particularly look for).

Just one last thing re colour spaces too: have you ever tried ProPhoto as an alternative?
Like I said earlier, I'm not all that well versed in Adobe, or Capture One either.
But I was under the impression that Lr is set by default to work in a ProPhoto-based colour space.
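On that ProPhoto thought, here is a rough Python check of how much headroom the wider gamut gives a fully saturated sRGB red. The matrices are the commonly published sRGB-to-XYZ and XYZ-to-ProPhoto ones, but the D65-to-D50 chromatic adaptation a real conversion would include is skipped here, so treat the output as approximate:

    import numpy as np

    srgb_to_xyz = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])
    xyz_to_prophoto = np.array([[ 1.3459, -0.2556, -0.0511],
                                [-0.5446,  1.5082,  0.0205],
                                [ 0.0000,  0.0000,  1.2118]])

    red_linear = np.array([1.0, 0.0, 0.0])   # pure sRGB red, linear light
    prophoto = xyz_to_prophoto @ (srgb_to_xyz @ red_linear)
    print(prophoto)
    # Roughly [0.50, 0.10, 0.02] - the most saturated red sRGB can hold
    # sits only about halfway up ProPhoto's red axis, which is why a
    # wide working space leaves more room before the channel clips.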

ricktas
10-08-2015, 7:35pm
*now tries to find Ricktas to see where he's run off to hide also*

*not hiding, just watching*

wayn0i
10-08-2015, 10:26pm
This red mushroom image is not far from clipping in the red channel, from what I can see.


Steve,

Completely off the subject, these mushroom images of yours are amazing, love that bioluminescent green image!

Mark L
10-08-2015, 11:29pm
Many words have been typed, but I'm going to try this potentially simple thing:


Then I'd follow that up with a -2% contrast reduction, which would miraculously recover the blown red channel.

Kym
11-08-2015, 12:10am
The colour red is the fastest colour (according to physics), so when taking an image, the red hits the sensor first.

Que? Isn't the speed of light constant?
The wavelengths are different.



Visible light is usually defined as having a wavelength in the range of 400 nanometres (nm), or 400×10⁻⁹ m, to 700 nanometres - between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths). Often, infrared and ultraviolet are also called light.

Update: it depends on the medium - http://www.quora.com/If-purple-light-has-more-energy-than-red-light-shouldnt-purple-light-travel-faster-inside-glass
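To put rough numbers on "it depends on the medium": inside glass the speed is v = c/n, and n is slightly higher for shorter wavelengths. The indices in this Python sketch are approximate values for a common crown glass, so the exact figures are illustrative:

    # Speed of two visible wavelengths inside a BK7-like crown glass.
    c = 299_792_458  # speed of light in a vacuum, m/s

    for colour, n in [("red (~656 nm)", 1.514), ("violet (~435 nm)", 1.527)]:
        print("%s: %.0f km/s" % (colour, c / n / 1000))
    # Red comes out roughly 1,700 km/s faster than violet inside the
    # glass; back in a vacuum, both travel at exactly c.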

bitsnpieces
11-08-2015, 12:42am
*not hiding, just watching*

Ah, you're right... where did I get hiding from... ??? :confused013



Que? Isn't the speed of light constant?
The wavelengths are different.

Update: it depends on the medium - http://www.quora.com/If-purple-light-has-more-energy-than-red-light-shouldnt-purple-light-travel-faster-inside-glass

Yes, red is part of the spectrum of light, and light has only one speed (in a vacuum, anyway).

My remark was more about how physics classes teach that if you split light up, red is the fastest colour of them all, so I was just playing on that. :)
Which is why I was playing on the idea that red is going to refract through the lens faster than the other colours. :P

But yeah, I'd have no idea really :confused013

ricktas
11-08-2015, 7:16am
Maybe it is lens design.

Take the ocean: it looks blue 'cause the sky is blue; why does the sky look blue? 'Cause the water is blue. But go deeper than that. When light hits the ocean at the surface, the water appears clear to us if we are in it. As you dive deeper, parts of the light spectrum drop away and the water around you does appear blue; deeper still, it appears as blackness.

What causes this, the water does.

So all this talk about red is based on the assumption that our lenses are perfect - that they capture exactly what is in front of them and pass that perfect light onto the sensor. What if the elements in our lenses are not all that perfect, and perhaps the exact modern lens design that makes taking photos so much damn fun also has a yet-undiscovered issue related to the light spectrum that causes some frequencies to pass more readily than others? Those frequencies would end up over-saturated.

Just thinking out loud. 'Cause I agree that reds do over-saturate more readily, but to find out why, we need to consider all possibilities.

arthurking83
11-08-2015, 11:22am
I'm pretty sure this is why we see chromatic aberrations in some lenses.

As the light travels through the different mediums (the various glasses), the wavelengths begin to travel at different speeds due to dispersion. Because they travel at different speeds, and are bent through the various concave and convex glass surfaces, the now-separated wavelengths land at slightly different locations on the sensor .. and that looks like chromatic aberration to us.
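A small Python sketch of that dispersion effect at a single air-to-glass boundary, using Snell's law (n1*sin(t1) = n2*sin(t2)) with the same approximate crown-glass indices as in the earlier sketch; the numbers are illustrative only:

    import math

    # One ray hits a flat air/glass boundary at 30 degrees; red and
    # violet see slightly different refractive indices, so they leave
    # at slightly different angles.
    incident = math.radians(30.0)

    for colour, n in [("red", 1.514), ("violet", 1.527)]:
        refracted = math.degrees(math.asin(math.sin(incident) / n))
        print("%s: refracted at %.3f degrees" % (colour, refracted))
    # The ~0.17 degree spread between the rays, accumulated over many
    # curved surfaces, is what lands as colour fringing at the sensor.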


And for Mark L: that -2 contrast reduction is only for ViewNX2 users(on NEF raw files).
I generally only tend to use this software, sometimes CaptureNX2 .. very very rarely Lightroom(4 point something tho!)

I doubt that the same step would work in Adobe(if you're using that).
I think from memory you also use DPP(can't remember?). I used to have it(for testing stuff) but haven't reinstalled it. Try some settings for yourself and let us know what happens.

arthurking83
11-08-2015, 12:27pm
Sorry, got disturbed by a knock at the door.

To add further to what I was saying to Mark.

My understanding of this thread is that it's more along the lines of "why do reds expose more, relative to the other two channels?"
Which is different to "why are my images overexposing?"

So my -2 contrast method is really relevant to images where the reds look seemingly OK in general, but appear to be overexposed relative to the blue and green channels.
This is different to an overexposed image!

My screenshot of the ViewNX2 rendering references the issue more closely.
Look at the blue/green/cyan histogram. In the ViewNX2 rendering all the data sits in the lower (below mid-range) region, yet the image is exposed quite OK considering both the real-life conditions at the time and the rendered look of the image.
By comparison, the ACR rendering's histogram says the red should be a lot darker, which it isn't in the actual image. The red colour (a window blind .. now gone, thank god!) appears about right.
But the histogram says it should be a lot darker .. maybe rendered as a dark red/burgundy.

The blue/green/cyan histograms tho .. while they are obviously different between the two programs, they're a lot more closely related.

So the question is more: why does the red channel display a wider variation between converters while the other channels don't?
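For checking that objectively, here is a minimal Python sketch that prints per-channel statistics for two renderings of the same frame; the file names are hypothetical stand-ins for, say, a ViewNX2 export and an ACR export of one NEF:

    import numpy as np
    from PIL import Image

    def channel_stats(path):
        # Flatten the rendering to RGB triples and summarise each channel.
        img = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        for ch, name in enumerate("RGB"):
            col = img[:, ch]
            print("  %s: mean %5.1f, 99th pct %3d, clipped %.2f%%"
                  % (name, col.mean(), np.percentile(col, 99),
                     100.0 * (col == 255).mean()))

    for path in ("render_viewnx2.jpg", "render_acr.jpg"):
        print(path)
        channel_stats(path)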

ameerat42
13-08-2015, 3:03pm
...I haven't dabbled in Sigma Foveon sensors yet but do they have the same reds (or other colour channels, for that matter) saturation issue?...

Swifty, here are some (at last!) from today, in middish-day sunlight, as the afternoon sun proved bothersome with a breeze blowing. They're all in FULL sun. Only the 1st image shows a straight conversion to jpeg alongside a toned-down one for comparison. The others are all straight conversions, except for the odd 1/2-stop decrease in exposure and a complementary 1/2-stop lightening of shadows, as noted.
Am.

1. Poinsettia - Left: as taken; Right: 1/2 stop less exposure in conversion

[attachment 119152]

2. ??Rhododendron (as taken) and Camellia (1/2 stop reduced)

[attachment 119153]

3. Euphorbia and a nice YELLOW succulent for comparison (1/2 stop down on exposure)

[attachment 119154]

swifty
13-08-2015, 5:42pm
Thanks, Ameerat. I'll have a better look when I get home, but just to clarify: which camera model were these shot on, and with which converter?
And out of curiosity, are the Foveon colours true to what you see in real life? I'm unfamiliar with the flower varieties.

ameerat42
13-08-2015, 9:28pm
ΣSD1M and SPP 5.4 (Sigma Photo Pro).
Your Q: in SPP, yes - and pretty much (most of the time) in the browser.
I do have some problems on this laptop (Asus) if I don't pick the right colour space. These ones, I think, are a bit warm, but not by much. Photoshop is where I get some trouble, and the only way to "solve" it is to discard colour profiles when opening files, even though they are the same.
Oh, well!!


PS: Looking at this post for about the 5th time, I will say that ALL but the Euphorbia look "normal".