Removing an 'edge glow'



rexboggs5
09-08-2019, 8:36pm
I recently was in the US and near Mt Hood in particular, where this photo was taken. I am seeking some advice on how to remove the 'glow' around the tops of the trees. It was there in the original photo but some post-processing has enhanced it. Any and all suggestions would be most appreciated.
Thanks and Cheers, Rex

[Attachment 140803]

farmmax
10-08-2019, 12:17am
You might like to have a look at this youtube video which shows a pretty easy, sneaky way to fix the halos.

How to Easily Remove Halo from Picture in Photoshop (https://www.youtube.com/watch?v=xUCqvs3-H5s)
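
If you would rather script the fix than do it in Photoshop, here is a rough Python sketch of the same general idea (this is not the exact method from the video, and the filename, threshold and strength values are just placeholders): find strong edges, grow them into a thin band, and pull band pixels that are brighter than their local surroundings back towards the local median.

    import numpy as np
    from PIL import Image
    from scipy import ndimage

    # Load the image as floats in 0-1 (filename is hypothetical).
    img = np.asarray(Image.open("mt_hood.jpg")).astype(np.float32) / 255.0
    lum = img.mean(axis=2)  # crude luminance

    # Strong edges, dilated into a band a few pixels wide (where halos live).
    grad = ndimage.sobel(lum, axis=0) ** 2 + ndimage.sobel(lum, axis=1) ** 2
    band = ndimage.binary_dilation(grad > 0.05, iterations=4)

    # Halo candidates: band pixels noticeably brighter than the local median.
    local = ndimage.median_filter(lum, size=15)
    halo = band & (lum > local + 0.02)

    # Blend halo pixels partway back towards the local median (all channels scaled equally).
    strength = 0.6
    scale = np.where(halo, (1 - strength) + strength * local / np.maximum(lum, 1e-6), 1.0)
    out = np.clip(img * scale[..., None], 0.0, 1.0)
    Image.fromarray(np.uint8(out * 255)).save("mt_hood_dehaloed.jpg")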

rexboggs5
10-08-2019, 6:43am
Thanks, that is exactly what I was looking for.
Cheers, Rex

- - - Updated - - -

I was watching the video on YouTube and noticed that the first video in the suggested list covered the same topic. It shows a more complex method for cases where the area of the glow includes trees. Here is the link to that video: https://www.youtube.com/watch?v=o4Sd1KTj0BI
So I will try the easier method first, and if that isn't satisfactory, I will try the other method.
Cheers, Rex

John King
10-08-2019, 10:38am
Stay away from overuse of the "claret" slider (clarity slider) in images like this.

Personally, I find that using the minimum amount of editing necessary in the widest possible colour gamut with high bit depth works well in avoiding this kind of problem in the first place. I use ProPhotoRGB 16 bit, but aRGB 16 bit is 'safer' in that you can see what you are doing better - assuming that you have a decent aRGB monitor.

rexboggs5
10-08-2019, 3:21pm
John - thanks for your advice. I haven't ever used ProPhotoRGB so I'll have to investigate that. And I've never heard of aRGB so I have no idea if my monitor supports that format. Some exploring to do ...
Cheers, Rex

John King
10-08-2019, 5:49pm
Rex, an understanding of colour spaces and bit depth is pretty important.

I call Adobe RGB (aRGB) 'safe' because it can be viewed in full on a high bit depth, wide gamut monitor. aRGB is a reasonably wide gamut colour space.

The monitor I use is a Dell UP2516D, which replaced my Asus PA246 earlier this year. The Asus now lives on my second editing PC for scanning and the like; it is getting pretty old now, and its colour accuracy may no longer be as good as the Dell's. The Dell is not 4K, only 2560x1440. With my eyesight getting worse each year and monitor resolutions going up each year, I can only just see the pixels on the new monitor with my computer specs on.

Gradually, the two specifications will converge ...

My main reasons for upgrading were the higher resolution and the better panel and colour lookup table. The Asus was 1920x1200, with a 12 bit LUT and a 10 bit panel, covering 98+% of aRGB. The Dell is 2560x1440, with a 14 bit LUT and a 12 bit panel, covering 100% of aRGB.

My problem with 4K monitors was that none of the ones I was prepared to spend the money on came close to 100% aRGB coverage. That ruled them out for me, as I use ProPhotoRGB 16 bit for editing.

My Epson printer will print much of a PPRGB colour space, and is a 16 bit device.

Colour spaces and bit depth are fairly complicated subjects ...

John King
10-08-2019, 7:39pm
BTW, I forgot to mention that one reason for using a wide gamut, high bit depth colour space is that it allows much wider editing latitude.

Hope that this helps a bit.

rexboggs5
10-08-2019, 7:54pm
Thanks John. I use 8-bit Adobe RGB for print photos and 8-bit sRGB for digital photos. I'll have to look into 16 bit depth and ProPhotoRGB to see what the differences/benefits are.
Anyway, for now, I have removed the edge glow using the technique shown in the video - see the attachment.
Cheers, Rex
[Attachment 140830]

- - - Updated - - -

Thanks again John. Can you explain what is meant by a 'wider editing latitude'?
Cheers, Rex

John King
10-08-2019, 9:41pm
You're welcome, Rex.

Every kind of image is just a data file, either in raw format or as a recognisable image file (.JPG, .TIF, etc).

Regardless of the file type, each colour pixel in the Bayer array is represented by a bit value in the resulting file. In an 8 bit processed file (e.g. an out of camera .JPG), that means that each pixel can have one of 256 values (2^8 = 0-255). In a 16 bit file (colour space), each pixel can have one of 65,536 values (2^16 = 0-65,535). This means that moving a colour value by one bit value has far less impact on a 16 bit file than on an 8 bit file (this is grossly and crassly simplified, but the point made is still correct).
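
To put rough numbers on the difference between an 8 bit and a 16 bit value (a quick Python sketch, nothing more):

    # Size of a one-step change relative to the full tonal range.
    step_8bit = 1 / (2**8 - 1)     # ~0.39% of the range
    step_16bit = 1 / (2**16 - 1)   # ~0.0015% of the range
    print(step_8bit / step_16bit)  # a single 8 bit step is ~257x coarser than a 16 bit step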

Many cameras also use 12 bits to represent raw file values, some use fewer bits, some use lossy compression, some use 14 or 16 bits (medium format cameras usually use 16 bit raw files).

Regardless, when the raw file is converted into an image file, the bit values are mapped into either an 8 bit colour space - irrecoverably crunched/flattened/lost data - or expanded into a 16 bit colour number. The latter preserves far more data than using an 8 bit editing workflow.
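
A minimal sketch of that mapping (assuming a simple bit-shift conversion, which real raw converters handle far more cleverly):

    import numpy as np

    raw12 = np.arange(4096, dtype=np.uint16)   # every possible 12 bit value
    to_8bit = (raw12 >> 4).astype(np.uint8)    # crunched into an 8 bit container
    to_16bit = raw12 << 4                      # expanded into a 16 bit container

    print(len(np.unique(to_8bit)))   # 256  - groups of 16 distinct raw values collapse into one
    print(len(np.unique(to_16bit)))  # 4096 - every distinct raw value survives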

A 12 bit raw file contains 2^12 = 4096 values for each of the four channels - each part of a Bayer array is made up of 4 pixels - a red, two different greens and a blue pixel, usually represented as RGBG, but there are some really weird colour filter arrays out there! My 2003 Nikon Coolpix 5000 used CMYG, or similar.
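
As a rough illustration of how those four values sit in the mosaic (assuming a common RGGB arrangement; as noted, the exact layout varies between cameras), the colour planes can be pulled out of the raw data with simple slicing:

    import numpy as np

    # A fake 4x4 "sensor" with a repeating 2x2 pattern (values are arbitrary 12 bit numbers).
    mosaic = np.random.randint(0, 4096, size=(4, 4), dtype=np.uint16)

    red    = mosaic[0::2, 0::2]  # top-left of each 2x2 block
    green1 = mosaic[0::2, 1::2]  # top-right
    green2 = mosaic[1::2, 0::2]  # bottom-left
    blue   = mosaic[1::2, 1::2]  # bottom-right

    # Each plane holds one quarter of the photosites; demosaicing interpolates the rest.
    print(red.shape, green1.shape, green2.shape, blue.shape)  # all (2, 2)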

More possible bit values for a given pixel translates into more editing latitude and better colour differentiation; more subtle editing is usually the result.

The colour space determines how large a colour gamut these numbers can represent.

Always remember that you can compress the bit depth of a file, but you cannot expand it! The same goes for going from a wide to a narrow colour space.
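
A quick Python sketch of why that compression is one-way (a smooth 16 bit ramp pushed down to 8 bits and "expanded" again):

    import numpy as np

    ramp16 = np.arange(0, 65536, 16, dtype=np.uint16)  # 4096 distinct 16 bit levels
    down = (ramp16 >> 8).astype(np.uint8)               # compressed to 8 bits
    back = down.astype(np.uint16) << 8                  # pushed back up to 16 bits

    print(len(np.unique(ramp16)), len(np.unique(back)))  # 4096 vs 256 - the lost levels never come back
    print(int(np.abs(ramp16.astype(int) - back.astype(int)).max()))  # worst-case round-trip error: 240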

In practical terms, an 8 bit sRGB .JPG generated from a 16 bit aRGB colour space raw will look much better than an OoC sRGB JPG - trust me!
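
For anyone making those web JPEGs outside the raw converter, here is a hedged Pillow sketch of just the final colour-space conversion (the master filename and the Adobe RGB profile path are placeholders, and Pillow works on 8 bit RGB, so the 16 bit editing is assumed to have already happened upstream):

    from PIL import Image, ImageCms

    master = Image.open("edited_master.tif")            # hypothetical Adobe RGB master, already 8 bit RGB
    argb = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # path to an Adobe RGB ICC profile (placeholder)
    srgb = ImageCms.createProfile("sRGB")

    # Convert the pixel values from Adobe RGB to sRGB so ordinary browsers display them correctly.
    web = ImageCms.profileToProfile(master, argb, srgb, outputMode="RGB")
    web.save("for_web.jpg", quality=92)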

farmmax
11-08-2019, 2:26am
Anyway, for now, I have removed the edge glow using the technique shown in the video - see the attachment.
Cheers, Rex
[Attachment 140830]



The attachment isn't showing up for me :( It would be interesting to see it.

rexboggs5
11-08-2019, 6:42am
Thanks John for your comprehensive advice. Some work for me to do!

Farmmax and others - the file can be viewed here: https://www.dropbox.com/sh/aycgrffc3oh6t9e/AADYL0rVNZPLuU8VaJec9nFWa?dl=0
Cheers, Rex

gcflora
12-08-2019, 1:14pm
In an 8 bit processed file (e.g. an out of camera .JPG), that means that each pixel can have one of 256 values (2^8 = 0-255).

Hmm... typo? 8 bit formats (like colour JPEG) use 8 bits per channel (8 bits each for red, green and blue). So the maximum number of colours a pixel can have in a colour JPEG (i.e. at 24 bit depth) is 2^(8*3) = 2^24 = 16,777,216. (https://en.wikipedia.org/wiki/JPEG#Encoding)

Looking at the source code for ImageMagick, it looks like 12-bit (per channel) JPEGs are at least possible -- it checks the bits per channel and outputs an error if more than 8 is specified by the file. I've never encountered a JPEG with 12 bits per channel, so maybe ImageMagick has the check just as standard programming practice... dunno, it just seems odd that the error message is "12-bit JPEG not supported" [1]. Unfortunately the JPEG standard (ISO/IEC 10918) is not free so I cannot check. T.871 (05/11) and T.872 (06/12) are available, but they are about the closely related JFIF and only mention support for 1 or 3 channels and 8 bits per colour channel ("The encoded image in the JPEG File Interchange Format shall have 1 or 3 colour channels and 8 bits per colour channel."). Anyway, the 1 channel version is for greyscale (256 shades).

Edit [1]: After looking more closely, that check might be there for TIFF files using JPEG compression, not for JPEG files per se. At any rate, 256 colour greyscale or 24-bit colour is standard for JPEG. Reading through the material at https://github.com/LuaDist/libjpeg suggests that 24-bit is normal for storage (e.g. phrases such as "JPEG's 24-bit output"). Interestingly, that library plans to add support for reducing colour precision (e.g. 24-bit to 15-bit), but this is post-processing after the 24-bit JPEG data is read.
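
For what it's worth, a quick way to sanity-check a file from Python with Pillow (just a sketch; "photo.jpg" is a placeholder):

    from PIL import Image

    img = Image.open("photo.jpg")
    bands = img.getbands()                  # ('R', 'G', 'B') for a colour JPEG, ('L',) for greyscale
    print(img.mode, bands, 8 * len(bands))  # baseline JPEG decodes at 8 bits per channel -> 24 or 8 bits per pixel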

John King
13-08-2019, 2:05pm
Craig, I think that you might be confusing the sensor characteristics with camera/software processing.

There are four channels in a Bayer array (the colour filter array, or CFA) - 1x red, 2x green, 1x blue. The camera then interpolates this into a JPEG file (of some description) which includes interpolated colours and luminosity.

Things are very different for RAW files.

The bit depth I am talking about is what is available to post-processing s/w after the camera has recorded the data in a file on the card. JPEG interpolation is a very complex subject, as you rightly state.

One of my reasons for choosing Olympus in the first place is that I have several uses for OoC JPEGs, and Olympus is the only maker (to my knowledge) that has 2.7:1 JPEG compression (LSF) rather than 4:1 (LF). This makes a big difference IMHO.

gcflora
13-08-2019, 2:29pm
Craig, I think that you might be confusing the sensor characteristics with camera/software processing.

There are four channels in a Bayer array (the colour filter array, or CFA) - 1x red, 2x green, 1x blue. The camera then interpolates this into a JPEG file (of some description) which includes interpolated colours and luminosity.


Yes, the camera is "seeing" at least 36 bits per pixel (12 bits per channel), taken from the CFA at 12 bits per channel (assuming the camera uses a 12-bit ADC). Some very, very expensive cameras might use 16 bits per channel (48 bits per pixel), which can still be stored using an unmodified TIFF format, or maybe even more bits per channel, but 16-bit ADCs are expensive, not to mention how much the sensor would cost. A JPEG processed by the camera will only have 24 bits (8 bits per channel), like any other colour JPEG.




Things are very different for RAW files.

The bit depth I am talking about is what is available to post-processing s/w after the camera has recorded the data in a file on the card. JPEG interpolation is a very complex subject, as you rightly state.

One of my reasons for choosing Olympus in the first place is that I have several uses for OoC JPEGs, and Olympus is the only maker (to my knowledge) that has 2.7:1 JPEG compression (LSF) rather than 4:1 (LF). This makes a big difference IMHO.

Maybe I am confusing things, but "In an 8 bit processed file (e.g. an out of camera .JPG), that means that each pixel can have one of 256 values (2^8 = 0-255)" seems wrong. The only processed files that use only 8 bits per pixel (not per channel) are single channel -- i.e. monochrome -- files. A normal colour JPEG, after the camera has interpolated the data, has 24 bits per pixel of colour data to work with when post processing (i.e. each pixel can have one of just over 16 million values). A raw file (assuming a 12-bit sensor/ADC) will have 36 bits per pixel to play with, i.e. 68,719,476,736 colours per pixel (or, put another way, 2^12 = 4096 values per channel). Many raw "formats" are modified TIFF files.
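
A hedged sketch of checking those numbers on an actual file with the rawpy library (a Python wrapper around LibRaw; the filename is a placeholder, and the maximum value depends on the camera's ADC bit depth and black level):

    import rawpy

    with rawpy.imread("photo.orf") as raw:
        bayer = raw.raw_image                   # 2-D array: one value per photosite, before demosaicing
        print(bayer.dtype, int(bayer.max()))    # typically uint16 holding ~12 or 14 bit values
        rgb16 = raw.postprocess(output_bps=16)  # demosaiced to 3 channels, 16 bits each
        print(rgb16.shape, rgb16.dtype)         # (H, W, 3), uint16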

From https://books.google.com.au/books?id=ZBijBgAAQBAJ&pg=PA88&lpg=PA88&dq=68,719,476,736%E2%80%AC+camera&source=bl&ots=ZxU-PDhj3B&sig=ACfU3U0funSnNGqoXOgFqmAXiOTvkTbYEQ&hl=en&sa=X&ved=2ahUKEwi2zvTt__7jAhXzheYKHaavA-MQ6AEwBHoECAUQAQ#v=onepage&q=68%2C719%2C476%2C736%E2%80%AC%20camera&f=false

[Attachment 140876]
[Attachment 140877]

So, yeah, I'm confused :)

Dylan & Marianne
13-08-2019, 9:03pm
For what it's worth, I had a slightly different approach to the initial link given!

https://www.youtube.com/watch?v=i6aADOLKszM

rexboggs5
14-08-2019, 5:41am
Hi Dylan and Marianne - Thanks for sharing this link.

For other AusPhotography members - Dylan and Marianne, based in Adelaide, are two of the best landscape photographers on the planet! I recommend that you go to 500px.com, search for Dylan Toh or for Marianne Lim and have a look at their marvellous photos.

Cheers, Rex

ameerat42
14-08-2019, 6:47am
^My usual method too, when necessary.

John King
16-08-2019, 9:18am
Craig, some thoughts worth contemplating, IMO.

https://gregbenzphotography.com/photography-tips/8-vs-16-bit-depth-photoshop

ameerat42
16-08-2019, 9:21am
^Fairly clear explanations (for once).

gcflora
16-08-2019, 2:01pm
Craig, some thoughts worth contemplating, IMO.

https://gregbenzphotography.com/photography-tips/8-vs-16-bit-depth-photoshop

From that link:


Which means that an 8-bit RGB image in Photoshop will have a total of 24-bits per pixel (8 for red, 8 for green, and 8 for blue)


So exactly what I was saying. 24 bits per pixel is 16 million possible colours per pixel, not 256. Oh well :)

ameerat42
16-08-2019, 2:03pm
I 4 1 am not disputing you, GC :D

gcflora
16-08-2019, 2:16pm
I 4 1 am not disputing you, GC :D

Having thought about this for a second, I think John King and I are actually saying the same thing but in different ways. I've been programming since... I dunno, around 1984... and throughout that time I've worked on bitmaps (including weird stuff like the Amiga's HAM mode) at a programming and hardware level quite a lot.

Yeah, most camera sensors have a Bayer filter over the sensor, so each "sensor site" (assuming an 8-bit sensor/ADC) can have one of 256 different values. But in this scenario I don't call each "sensor site" a "pixel". It's not a "pixel" until after the raw data has been processed, IMO (in a typical 8-bit sensor with a Bayer filter there are 4 different sensor sites representing one pixel). I think this is where we've got our wires crossed. I'm calling a pixel the single "dot" that results from processing the sensor data. I think John King is probably talking about sensor sites and referring to those as pixels -- if I'm wrong I apologise, but I think this conversation/confusion is probably just a semantic misunderstanding.

ameerat42
16-08-2019, 2:26pm
Yes, I've seen plenty of arguments about "pixels" and "photosites"... :nod: :D

John King
16-08-2019, 3:55pm
From that link:


So exactly what I was saying. 24 bits per pixel is 16 million possible colours per pixel, not 256. Oh well :)

256^3 = 16,777,216 ... :nod: ;).

John King
16-08-2019, 5:22pm
Craig, I've felt all along that we were talking at cross purposes - i.e. in heated agreement with each other, using different words.

Bensch
16-08-2019, 7:47pm
You might like to have a look at this youtube video which shows a pretty easy, sneaky way to fix the halos.

How to Easily Remove Halo from Picture in Photoshop (https://www.youtube.com/watch?v=xUCqvs3-H5s)

Great video, thanks for sharing it :th3:

rexboggs5
16-08-2019, 9:34pm
John - thanks for that article on bit depth. It is very informative.
Cheers, Rex

John King
16-08-2019, 10:23pm
John - thanks for that article on bit depth. It is very informative.
Cheers, Rex

You are welcome, Rex. I have a huge amount of literature and URLs pertaining to bit depth and colour spaces. So many that it would be a thread bomb if I could find them all ...