PDA

View Full Version : Either the camera moved more than I thought, or the lookout I was standing on moved?!?



tandeejay
30-11-2017, 9:38pm
I was up on Mt Gravatt the other night, and took a couple of long exposure photos of Brisbane city.

This one turned out ok

https://farm5.staticflickr.com/4550/23876053697_ae7737841a_b.jpg (https://flic.kr/p/CnR1We)
Brisbane (https://flic.kr/p/CnR1We) by John Blackburn (https://www.flickr.com/photos/99216317@N08/), on Flickr

But this one seems to have some serious camera movement?!? :confused013

133591

So what happened?

I didn't have my tripod with me so I placed the camera on the solid steel railing of the cafe on Mt Gravatt.

I did the same thing for each photo: ISO 100, f/9, 30 seconds, but the second one seems like the camera is slewing sideways.

I realized later that I forgot to turn off the vibration reduction. Could that have caused this? Or did my attempt to hold the camera still actually cause more movement than I realized? The steel railing of that cafe is box tubing, maybe 3" wide, and very solid.

Cheers,
John

nardes
01-12-2017, 7:12am
Hi John

It looks like #2 has a combination of camera shake when the shutter was released (the short wavy line at the start of the movement trail) then some linear movement where the camera appears to be sliding or rotating more or less horizontally.:confused013

The movement is approx. 100 pixels in the 1200 pixel wide frame so it is highly unlikely that this can be attributed to VR being on - I suspect that the amplitude of any VR corrections would be much, much less than 100 pixels.

Does the road up the mountain still close at 11:00pm to 6:00am and did you feel safe up there?

Cheers

Dennis

Geoff79
01-12-2017, 7:48am
The second one offers an interesting result, almost like you’re witnessing an attack of a mass of colourful little jet fighters or something.

Well spotted, George Lucas... uh, I mean John.

The cityscape looks nice. I like that haze of cloud above the scene.


Sent from my iPhone using Tapatalk

tandeejay
01-12-2017, 8:43am
Dennis, thanks for your thoughtful technical insights. The original is 6000 pixels wide, so that 100 pixels here would have been closer to 500 pixels in the full-size image. If I hadn't had my hand on the camera, the weight of the lens would have pulled the camera off the railing I had it perched on. Now I want to get up there again with my tripod so I can shoot hands free. Not sure if the road is closed after 11pm, however I suspect it is, as the gate at the bottom of the road is still there. I was concentrating on the drive, and failed to observe the sign indicating hours...

And I felt quite safe. It is well lit and there were quite a few people there. The cafe was closed though. It closes at 3pm :(

Geoff, I love your description. My first thought when I saw it was a Cylon attack :D (who remembers Battlestar Galactica?)


Sent from my iPhone using Tapatalk

Tannin
01-12-2017, 12:24pm
Take 1: It's not a Bali skyline is it? Oh no, silly me. That's a volcano, not an earthquake.

Take 2: Perfectly normal. Notice that the clouds are sharp, the movement of the buildings is caused by the rotation of the earth. Wait a bit longer and you'll get a sunrise at the eastern corner.

Take 3: Not one of those rotating restaurants, is it?

Take 4: Vertical railing? Round pipe steel? Hard to see how you'd get that with horizontal railings.

Take 5: What Dennis said.

arthurking83
03-12-2017, 7:34am
The way I'm seeing this 'moved' image is a little bit opposite to how Dennis described it.
That is, the exposure seems to have been initiated from the RHS and slowly rotated towards the left.

So the brightness of the horizontal trails indicates that the exposure dwelt longer at the beginning of the 30 sec, while the camera was slowly rotated leftwards (note the brightness of the far RH side of each trail).
Then towards the end of the exposure there was vertical movement while the exposure was shorter.

Brightness indicates the length of exposure at each spot.
So I'm assuming that you originally framed the scene so that the tree (branches) on the LHS were not in the view (but they ended up in it).
This implies that you started towards the right (of the frame).

The brighter, thicker trail lines indicate more exposure (longer time spent at that spot), so what I'd say may have happened is that you slowly rotated the camera (not realising it), which has the same effect as imaging that part of the scene for longer (brighter exposure).

Note how on each of the trails the brighter parts all correspond to the same short length of each trail, so I doubt that each of those light sources was brighter for the duration of that part of the exposure and then they all dimmed for the rest of the exposure.
(ie. all light sources maintained the same brightness level!)

If you (can imagine to) break up the horizontal lines into sections of the 30 sec exposure:
Note how about 1/3rd of each line appears a lot brighter than the remaining 2/3rds. This implies more time spent there (ie. longer exposure there).
So for a 30sec exposure, you would have spent (possibly) 20 seconds slowly rotating the camera to the left, so that part of the line is going to appear brighter.
Then after those 20 seconds, you began to speed up the rotation, which has a similar effect to shortening the exposure, so the line will be less bright .. ie. the line of light will then be imaged fainter (as in your moved image).

The wavy/bumpy part right at the end makes little sense though, except for VR kicking in/out, or the time spent there.
Not knowing how long each part of the line had been exposed for, though, it's impossible to explain it accurately.

It's the way exposure works:
It's common knowledge that a long exposure (say 1 sec or even half a sec), even on a semi-sturdy tripod, can still be made blurry by some small movement. Mirror slap/vibration is a common issue with this type of long exposure.
That is, mirror slap vibration can cause blurring of an image for this type of long exposure.
Mirror slap usually causes vibration for approximately half a sec (let's call it 500ms ... 1000ms = 1 sec).

If you shoot a static object with a 1 sec exposure NOT using a mirror lock-up method (which also includes exposure delay mode), then of the total 1000ms, for 500 of those milliseconds you are imaging your static subject through a moving camera. That is, the camera is moving (relative to the subject) for half the total exposure time.
A quicker exposure eliminates this 500ms of movement. This is a well known method of eliminating vibrations and their effects.

But what isn't as well known is that a (much) longer exposure also 'eliminates' motion blur due to mirror slap, and in a sense you can see the reason for that method in your blurred image.
If the previous 1 sec scenario was instead exposed for 10 sec (again thinking in terms of ms), then at 10000ms, with a 500ms section of the exposure being shaken AND stirred, the effect on the image is completely different.
Assuming that no other movement is going to come into play, then what happens is that for 500ms you get motion(from the camera), but then the next 9500ms are going to be rock steady.

For those 9500ms the image is 'burned' into the sensor for a lot longer than the blurry 500ms. That's almost 20x more exposure of the subject not moving compared to the amount of exposure of it being blurred.
The longer exposure of the steady image overrides the shorter exposure of a blurry subject.
The end result is a sharp image, using a (much)longer exposure.
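
Rough numbers for that ratio (a quick Python sketch; the fixed 500ms shake duration is just my assumption from above):

```python
# Ratio of rock-steady exposure to shaken exposure,
# assuming mirror-slap vibration lasts a fixed 500 ms.
def steady_ratio(exposure_ms, shake_ms=500):
    """Return steady_ms / shake_ms: how much more the sharp
    image is 'burned in' compared to the blurred one."""
    steady_ms = exposure_ms - shake_ms
    return steady_ms / shake_ms

print(steady_ratio(1000))   # 1 sec exposure: 1.0 -> blur and sharp get equal exposure
print(steady_ratio(10000))  # 10 sec exposure: 19.0 -> the steady image dominates ~20:1
```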

So in your image, using similar principles, the brighter line indicates more exposure.
Being brighter at the RH side implies that the image was 'burned in' for longer at that point, which I think means that while you thought you held the camera 'steady' you were in fact slowly rotating it to the left.
Note that this is just an assumption; you could just as easily have rotated it left, then right, then left again for a bit .. but the effect appears to be the same. That 1/3rd(ish) part of each line is much brighter than the LH 2/3(ish) of the line.

Anyhow, that's my analysis of what probably happened.
If that is wrong, then I'm going with Tannin's 2nd point! :D

Mark L
07-12-2017, 9:09pm
Could it be as simple as this? (and I haven't read what AK posted. :( or:))
Hold the camera steady and use the self timer to activate the camera. Stops the camera movement as you press the shutter button.

ameerat42
07-12-2017, 9:56pm
Tands. You have simply captured the essence of Brizzy.
A pictorial slogan: Brisbane - city on the move!

A duller explanation would be a slight camera movement. It's the simplest one to account for it.
If you analyse the movement it would represent a tiny swing of the camera on an axis. Those
light streaks are about 1/10 of the frame, so about 2.5mm across - likely a very small rotation.

tandeejay
07-12-2017, 11:44pm
Could it be as simple as this? (and I haven't read what AK posted. :( or:))
Hold the camera steady and use the self timer to activate the camera. Stops the camera movement as you press the shutter button.

I thought I was holding the camera steady. It was a 30 second exposure. Are you suggesting the entire 500px movement could have been caused at the beginning when I first pressed the shutter button, and not by movement that occurred while the shutter was open?

I do like the suggestion that the movement was caused by the rotation of the earth :D although I still like the Cylon attack idea :D:D:D



Tands. You have simply captured the essence of Brizzy.
A pictorial slogan: Brisbane - city on the move!

:lol:



A duller explanation would be a slight camera movement. It's the simplest one to account for it.
If you analyse the movement it would represent a tiny swing of the camera on an axis. Those
light streaks are about 1/10 of the frame, so about 2.5mm across - likely a very small rotation.

Yeah, I guess you're right. With the lens at 80mm, what does that convert to in degrees of rotation?

I was just curious as to how I managed to get no movement in the 1st 30 second exposure, and that "huge" rotation in the second...

Moral of the story... always take the tripod

ameerat42
08-12-2017, 8:25am
...with the lens at 80mm, what does that convert to in degrees of rotation?...

AppROXimately 1.8°

Use either of asin or atan.
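
For example, in Python (using my estimated 2.5mm of streak across the sensor; atan is the safer choice for a ratio of lengths):

```python
import math

streak_mm = 2.5   # estimated streak length projected onto the sensor
focal_mm = 80.0   # lens focal length

# Rotation angle whose image-plane sweep is 2.5 mm at 80 mm focal length:
angle_deg = math.degrees(math.atan(streak_mm / focal_mm))
print(round(angle_deg, 2))  # ~1.79 degrees
```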

tandeejay
08-12-2017, 9:19am
80mm on an APS-C sensor...


Sent from my iPhone using Tapatalk

ameerat42
08-12-2017, 9:51am
(I estimated) 2.5 mm of movement >(across the sensor)< over 80mm FL...
Sensor size does not matter.

tandeejay
08-12-2017, 9:05pm
But when you're trying to figure out how much the camera moved in degrees, wouldn't the sensor size be a factor in the calculation?


Sent from my iPhone using Tapatalk

Tannin
08-12-2017, 9:35pm
Lateral movement in a plane at 90 degrees to the alignment of the lens would be directly relevant to sensor size, but rotation about an axis at 90 degrees to that same alignment is not. Rotational movement is relevant to the field of view, which in turn is determined by the combination of focal length and sensor size.

Think of it this way: rotate a camera from left to right by a degree or two. If you have a narrow field of view (a 500mm focal length on a full frame body, let's say), the shot is completely ruined. But you'd probably get away with it at 16mm. This is why the old rule about shutter speed / focal length works. (After a fashion.)

Now face the camera due south and move it sideways by a few mm while (through some magic) still keeping it oriented exactly south. For the long lens, there will be little or no effect - or rather, no effect with a distant subject. Where the subject is close to the camera, however (assume it is an ant or a flower), the effect becomes significant. (This is why Canon's latest macro lenses have a new IS system which, unlike traditional IS, compensates for lateral movement as well as rotational. Possibly other brands do it too now; Canon introduced it with the 100/2.8L Macro a few years back.)

So, to a first approximation, the effect of rotational movement at 90 degrees to the axis of view depends on field of view (the wider the better), while the effect of lateral movement depends on distance to the subject.

(Written as if I am speaking ex cathedra, which I ain't. I'm just thinking about the geometry and doubtless making a mess of it.)

Edit: come to think of it we can demonstrate the second proposal with a simple thought experiment. Imagine you are sitting on a fast-moving train and looking out the window. The bushes on the trackside a few feet away are just a blur to you, but the mountains in the distance remain perfectly clear.
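
The geometry can be sketched in a few lines of Python (hypothetical numbers; thin-lens, small-angle approximations, and the f/D magnification only holds for subjects much further away than the focal length):

```python
import math

def blur_from_rotation(focal_mm, angle_deg):
    """Image-plane shift from rotating the camera: depends only on
    focal length and angle, not on subject distance."""
    return focal_mm * math.tan(math.radians(angle_deg))

def blur_from_translation(focal_mm, shift_mm, subject_mm):
    """Image-plane shift from sliding the camera sideways: scales
    inversely with subject distance (magnification ~ f/D)."""
    return focal_mm * shift_mm / subject_mm

# 500 mm lens, 5 mm of sideways slide:
print(blur_from_translation(500, 5, 2_000_000))  # distant city (2 km): ~0.00125 mm, invisible
print(blur_from_translation(500, 5, 1_000))      # close subject (1 m): 2.5 mm, ruinous
# ...whereas a 1 degree rotation ruins the shot at any distance:
print(blur_from_rotation(500, 1.0))              # ~8.7 mm swept across the image plane
```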

ameerat42
08-12-2017, 10:08pm
But Tannin, your theory falls flat because nothing can exceed the speed of light.
(Unless, of course, you first switch off that light, and then you can walk faster:nod:)
--Just a thought:p

Tannin
08-12-2017, 10:14pm
Actually, darkness can exceed the speed of light. Think about it ... no matter how fast the light travels, the darkness always gets there first.

Ross M
08-12-2017, 10:16pm
I can't contribute much after previous posts have given some impressive analysis. I would back up the conclusions based on my experience at similar exposure times both with and without vibration control enabled. In other words, I forgot to turn it off. The result was a slightly softer exposure. By the way, you did extremely well to capture the first one without a tripod.

ameerat42
09-12-2017, 8:05am
Actually, darkness can exceed the speed of light. Think about it ... no matter how fast the light travels, the darkness always gets there first.

That's top logic. I shoudder seen it. -- But then the light was off :(

arthurking83
09-12-2017, 2:12pm
But when you're trying to figure out how much the camera moved in degrees, wouldn't the sensor size be a factor in the calculation?


Sent from my iPhone using Tapatalk

Yep! sure does.
In fact, you don't need to know what camera, sensor size or focal length was used in the image that was ruined either.

If you can calculate the number of pixels in the streak (ie. crop the image from one edge of the light trail to the other to count the pixels) and divide that number by the number of pixels across your sensor, then you have a good approximation of how far you moved the camera, as a fraction of the frame.

The D5500 has 6000 pixels on the long side.
Let's say you moved the camera 500 pixels (just for ease of calculating numbers) .. you moved the camera by 8.3% of its FOV with the lens used.

Where the focal length comes into play is to place the amount of movement into context(like Tannin wrote)
80mm on a Nikon APS-C sensor (note this is different to Canon's 1.6x multiplier!) is equal to about 17° total FOV at infinity.

** ignore this part if you don't really care! :D ** But do note that infinity can be an important point, as some lenses shorten focal length(and hence FOV) as they focus closer .. ** just for total clarity

So if you moved the camera 8.3% of the total frame capture, then you must have moved it about 1.4° laterally to achieve the image you captured.

If you had moved the camera the same small 1.4° with a much wider angle lens (say 16mm), this would have amounted to only about 1.9% of the frame for that sensor/lens combo.
1.9% of 6000 equates to only about a 115 pixel trail on the full image, basically 1/4 of the trailing you got. Not impossible to get away with (as Tannin wrote) ... but much less obviously trashed.
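
The steps above as a rough calculator (Python; I'm assuming a 23.5mm sensor width for Nikon APS-C, and treating the streak fraction as a linear slice of the FOV, which is close enough at these angles):

```python
import math

def hfov_deg(focal_mm, sensor_width_mm=23.5):
    """Horizontal field of view at infinity for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def movement_deg(streak_px, frame_px, focal_mm, sensor_width_mm=23.5):
    """Camera rotation implied by a streak_px-long trail in a frame_px-wide image."""
    return (streak_px / frame_px) * hfov_deg(focal_mm, sensor_width_mm)

print(round(hfov_deg(80), 1))                 # ~16.7 degrees at 80 mm
print(round(movement_deg(500, 6000, 80), 1))  # ~1.4 degrees of rotation
print(round(movement_deg(500, 6000, 16), 1))  # the same streak at 16 mm needs ~6 degrees
```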


What I alluded to in my (tedious) first reply was that I'm more curious as to how you moved the camera.
How, in reference to the technical result, rather than as an expression of exasperation .. ie. 'how on earth!'(didn't you keep it steady).

Note the two indicators that almost display what I'm looking at:
1. the tall building with the two vertical columns of light
and
2. the two triangular shaped bridge lighting movements.

The light trails don't really show the type of movement that occurred, as much as the two points above explain.
The movement appears to have been 'stepped', because with those two indicators, you can see definitive shapes as well as trails of lighting in the shapes.

In the columns of light you can see multiple vertical strips (hard to count), but in the bridge's triangles you can easily distinguish at least 4 movements, and possibly 5 triangle shapes with trailing between each.
So your movement wasn't a gradual smooth panning action; it must have been slow panning, stopping for a sec or 8 at a point, then more gradual panning, stopping again at each of those points.

ps. Tony is correct that dark travels faster than light.

lemon juice won't help you rob a bank! (https://youtu.be/HtB8fn1waVU) :D
link goes to Vsauce on Youtube

.. if you've never seen Vsauce videos, you've totally missed out on some awesome geek info that you just didn't know you always wanted to know ;)
(and if you don't watch the entire video .. which I recommend you do! .. at least go to 8:50 for a bit of a :lol:)

tandeejay
09-12-2017, 2:38pm
more curious as to how you moved the camera.
How, in reference to the technical result, rather than as an expression of exasperation .. ie. 'how on earth!'(didn't you keep it steady).

Note the two indicators that almost display what I'm looking at:
1. the tall building with the two vertical columns of light
and
2. the two triangular shaped bridge lighting movements.

The light trails don't really show the type of movement that occurred, as much as the two points above explain.
The movement appears to have been 'stepped', because with those two indicators, you can see definitive shapes as well as trails of lighting in the shapes.



That is exactly what was on my mind... How did the camera move? I thought I was holding it very firmly on the solid rectangular tube steel railing.

Coming back to the vibration reduction, is it possible that the VR mechanism might have been causing slight vibrations of the camera body, sufficient to overcome the friction between camera and steel?

Here are a couple of 100% crops at the width of the movement.


133727

133728

133729

ameerat42
09-12-2017, 2:48pm
(I estimated) 2.5 mm of movement >(across the sensor)< over 80mm FL...
Sensor size does not matter.


But when you're trying to figure out how much the camera moved in degrees, wouldn't the sensor size be a factor in the calculation?


Sent from my iPhone using Tapatalk


Yep! sure does...

So, a disagreement. My argument is based on the simplest trig required to explain it.

Where I might be wrong is the estimate of the streak length. Nevertheless, that streak length
will not matter for a given focal length, whether the sensor is a crop one or not. Just the field
width is different.

Anyway, AK, I couldn't see in your reply where you showed that the sensor size does matter.

On the matter of the bridge, are you saying that it does not appear to have moved the same
distance as the other streaks? I think it has.

tandeejay
09-12-2017, 2:58pm
No, I think AK was saying the bridge is showing that the movement wasn't smooth...

- - - Updated - - -

or. the lights on the bridge were flashing? :D

ameerat42
09-12-2017, 3:01pm
...or. the lights on the bridge were flashing? :D

Ahh! Clearly, an ambulance was crossing at the time:nod:

tandeejay
09-12-2017, 3:41pm
Actually, the truth is, I was trying to get a photo of a pelican for the Panning weekly challenge :nod:

https://farm5.staticflickr.com/4640/38891212172_0d4a750d6a_b.jpg (https://flic.kr/p/22fFCqU)
Pelican Brisbane Panning (https://flic.kr/p/22fFCqU) by John Blackburn (https://www.flickr.com/photos/99216317@N08/), on Flickr

arthurking83
10-12-2017, 8:18am
No, I think AK was saying the bridge is showing that the movement wasn't smooth...

.....

or. the lights on the bridge were flashing? :D

Doh! didn't thunk of that possibility :lol2:
Yeah, if there are concentrated spots of light, then either the light is variable in strength, or the movement over time wasn't smooth


Actually, the truth is, I was trying to get a photo of a pelican for the Panning weekly challenge :nod: ..

As Harry Hoo would say .... "two possibilities!"
I'm surprised you cloned the pelican out to begin with :p


So, a disagreement. My argument is based on the simplest trig required to explain it.

Where I might be wrong is the estimate of the streak length. Nevertheless, that streak length
will not matter for a given focal length, whether the sensor is a crop on or not. Just the field
width is different.

....

It should.
Like Tannin said, the length of the streak will depend on the FOV and the amount of movement.

So a camera with a 5° FOV with a 500pixel streak where the sensor is 5000pixels wide needs a lot less lateral movement than it needs for the same 500pixel streak with a lens that shows 50° FOV.

Think of it in terms of star trails, or a blurry moon.
With star trails, using a 12mm lens, a 30sec exposure may only give you a 5 pixel trail.
Using a 1000mm lens instead will render more like 100 pixels of trailing.

For the purpose of explanation(ie. not real numbers) lets say the earth rotates 1° every 30sec, we know it's less, but for the sake of simplicity!

12mm lens gives a 120° FOV
1000mm lens shows a 1.2° FOV
the earth still only moves at 1° every 30sec.

The 12mm lens only moved 1% of the frame; the 1000mm lens moved by almost 100% of the original frame.

I think (from memory) it only takes about 2 mins for the moon to travel from one corner of the frame to the opposite corner when using an 800mm lens on a 135 format frame.
It may be 5 mins. I just can't remember, but I did the test a while back and it is very quick.
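
You can put rough numbers on the star-trail comparison (Python sketch; I'm assuming the sidereal rate of ~360° per 23h56m, a star near the celestial equator, a 36mm-wide full-frame sensor with a 6000 pixel frame, and I'm ignoring projection effects and declination):

```python
import math

SIDEREAL_DEG_PER_SEC = 360.0 / 86164  # ~0.0042 deg/s (one sidereal day)

def trail_px(focal_mm, exposure_s, frame_px=6000, sensor_width_mm=36.0):
    """Approximate star-trail length in pixels for a star near the
    celestial equator, for a given lens and exposure time."""
    hfov = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    drift_deg = SIDEREAL_DEG_PER_SEC * exposure_s
    return drift_deg / hfov * frame_px

print(round(trail_px(12, 30)))    # wide lens, 30 sec: a handful of pixels
print(round(trail_px(1000, 30)))  # long lens, 30 sec: hundreds of pixels
```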


But in a technical sense, you're right Am ... a streak doesn't rely on the focal length or sensor alone.
But in John's situation here, we know the focal length was set to 80mm, and the sensor is a 1.5x crop .. and that it has 6000 pixels on the long axis (with 500 pixels of movement).

If we didn't know the FOV value, then we couldn't calculate the amount of movement effected.
That is, we don't know the FOV in the image unless we know the sensor lens relationship.
Remember that the frame can be made the same whether a 10mm lens is used, or a 1000mm lens is used(you just need a lot of room to move about! :p)
The major difference will be perspective(compression/extension of the varying elements in the scenes).
But you can get the framing the same. What we can't work out just from the image is a value for the FOV, which then gives us a percentage amount of movement, and eventually how much the camera moved.
1.4° is very little movement, so hard to detect in some situations.

I can't imagine the VR mechanism would cause camera movement, and I don't think it moves by that much (ie. 10% of the frame).

One thing I reckon I can be sure of tho... I don't reckon you could replicate that same rendering if you wanted to using the same method.
You probably could do so on a tripod tho (with the panning doodad).

It could have been something as simple as a very small stone or twig getting stuck between camera and railing, causing movement whilst you had some pressure on the camera to keep it from falling into the abyss.
If there was any hint of wind, then the force of the wind on you and your hand could have masked the movement too .. remember, only a very small amount of movement there.


Anyhow, I reckon it was the (cloned out) pelican's fault. Is there any phosphate compound, possibly originating from a pelican, on the camera or lens! :D

feathers
10-12-2017, 8:26am
I like the first image, as well as the flat lining in the second:D

tandeejay
10-12-2017, 8:30am
In relation to the VR, I was not thinking of the VR mechanism itself creating that much image movement, but of vibrations in the camera body, caused by the VR mechanism trying to compensate when there was no camera movement, that might have made the body creep along the railing?

There must have been some breeze as the branches in the tree show movement that is definitely not linked to any camera movement.


Sent from my iPhone using Tapatalk

tandeejay
10-12-2017, 8:32am
Also, if all you have is the photo, can't you figure out the FOV if you know the location of the camera and some of the landmarks in the image? Plot the points on a map, and then just measure the angle :D
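
Something like this (Python sketch with made-up coordinates; flat-map approximation over a few km, with two hypothetical landmarks assumed to sit on the left and right edges of the frame):

```python
import math

def bearing_deg(cam, landmark):
    """Compass-style bearing from camera to a landmark on a local
    flat map; points are (east_m, north_m) tuples."""
    de, dn = landmark[0] - cam[0], landmark[1] - cam[1]
    return math.degrees(math.atan2(de, dn)) % 360

def fov_from_landmarks(cam, left_lm, right_lm):
    """Angle subtended between two landmarks as seen from the camera."""
    diff = (bearing_deg(cam, right_lm) - bearing_deg(cam, left_lm)) % 360
    return min(diff, 360 - diff)

# Hypothetical positions (metres on a local grid), camera at the origin:
cam = (0.0, 0.0)
left_tower = (-1200.0, 8000.0)   # landmark on the left edge of frame
right_tower = (1200.0, 8000.0)   # landmark on the right edge of frame
print(round(fov_from_landmarks(cam, left_tower, right_tower), 1))  # ~17 degrees
```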


Sent from my iPhone using Tapatalk

ameerat42
10-12-2017, 9:47am
To continue this discussion, it is much easier to use text color to differentiate between what AK said
(or which I have left out as "...") and what I am saying...


It should.
I've forgotten what this relates to.

Like Tannin said, the length of the streak will depend on the FOV and the amount of movement.
Not disputed, but not exclusive.

Think of it in terms of star trails, or blurry moon.
With star trails, using a 12mm lens, a 30sec exposure may only give you a 5 pixel trail.
Using a 1000mm lens instead will render more like 100pixels of trailing.
Utterly agree with this.

But in a technical sense, you're right Am ... a streak doesn't rely on the focal length or sensor alone.
No to the first bit, and yes to the second.

But in John's situation here, we know it's an 80mm focal length set, and the sensor is a 1.5x crop .. and that it has 6000pixels on the long axis(with 500 pixels of movement).
Not that anything but the FL really matters.

If we didn't know the FOV value, then we couldn't calculate the amount of movement effected.
Not necessarily, as long as we had other parameters, like FL and frame size, as we have here; and anyway, the assumption here is
that it's the whole frame shown. (Tands did not say otherwise and this is what I based the streak length on.)

That is, we don't know the FOV in the image unless we know the sensor lens relationship.
Yes!/Congruency!/At one!/Etc!

Remember that the frame can be made the same whether a 10mm lens is used, or a 1000mm lens is used(you just need a lot of room to move about! :p)
The major difference will be perspective(compression/extension of the varying elements in the scenes).
You must mean the scene in the frame. But the perspective will only change if you change the subject distance, not
the focal length. I assume you mean you are able to dramatically change the sensor size while changing the FL from
10mm to 1000mm.

...
What we can't work out just from the image is a value for the FOV, which then gives us a percentage amount of movement, and eventually how much the camera moved.
You can using sensor size and lens FL, using simply "similar triangles".

1.4° is very little movement, so hard to detect in some situations.
For situations of extremely wide-angle lenses, maybe, but 1.4° is ~ 3 full-moon diameters. From an F=80mm lens
(using similar triangles or sine or tangent for such a small angle) it would show as ~1.95mm on the frame - no matter
how big that frame.
...
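
(The similar-triangles version in one line of Python, for anyone checking: the on-sensor size of an angle depends only on the focal length.)

```python
import math

# 1.4 degrees of rotation seen through an 80 mm lens, projected onto the image plane:
focal_mm = 80.0
angle_deg = 1.4
shift_mm = focal_mm * math.tan(math.radians(angle_deg))
print(shift_mm)  # ~1.95 mm, whatever the sensor size
```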

To summarise: I'm just using the simplest approach that will explain it:confused013
(And now my throat is dry!)

arthurking83
10-12-2017, 8:28pm
So, I'm reading that we have both agreement and disagreement in a few pockets of resistance. :p

You said:


.... Nevertheless, that streak length
will not matter for a given focal length, whether the sensor is a crop on or not. Just the field
width is different.

....

Are you referring to the streak, or the image itself(unclear)

I said:


...

It should.
Like Tannin said .....

The assumption is that there won't be a crop, and that the pixel dimensions will therefore be known.
Otherwise it's incalculable! (or close to it).
FOV depends on the lens/sensor relationship. An 80mm lens's FOV on APS-C is narrower than the same lens's FOV on full frame.
Back onto the crop point before this: if the sensor was 135 format, and the crop was definitely known (eg. to the common 1.5x APS-C format), then FOV can easily be calculated.

I wouldn't rely on estimations of FOV going by fixed points in a scene. For this instance, it may work with a degree of error .. but not guaranteed to work in all cases.
As this now turned into an extended discussion, and not only for specific assistance to the original post .. then we should try to keep the topics applicable to more situations.

Perspective doesn't change with sensor size, only with lens focal length.
**Many modern lenses change focal length with focusing, so focusing can also affect perspective, but an ideal lens(that doesn't change focal length with focusing) won't change perspective either.

What wouldn't matter (in this topic of movement vs FOV) in terms of sensor 'size' is pixel density.
Had the camera been a 6Mp (2000x3000) type, then what would have been different would have been the number of pixels from one end of the streak(s) to the other.
ie. half (from 500 down to 250).
The movement was simply that the camera moved X number of degrees, or a percentage of the FOV.


As a hypothetical: let's say Tandee created those 500 pixel streaks in Photoshop, to simulate movement; then on the 6000 pixel frame that equated to about 8% lateral rotation.
Had the sensor been the 6Mp, 3000-pixels-across type, and using the same 500 pixel streak effect, then the percentage of 'movement' would amount to double that lateral movement (ie. 16%).
When I referred to the 10mm vs 1000mm scenario I meant actually moving the camera/lens to simulate the same FOV at infinity.
So I suppose in theory we could determine FOV without any idea as to focal length and sensor size, relying solely on some reference points in an image, but only if you know the coordinates of those reference points. Not easy to do in every situation .. but doable (I suppose).
An arduous task at best with most of my images, which involve many flat featureless outback scenes tho! ;)



.... For situations of extremely wide-angle lenses, maybe, but 1.4° is ~ 3 full-moon diameters. From an F=80mm lens
(using similar triangles or sine or tangent for such a small angle) it would show as ~1.95mm on the frame - no matter
how big that frame.

Can't work.
Imagine two extremes:

1. An 80mm lens on a smartphone sensor would equate to something like a 3000mm lens on a full frame camera. Point that at the moon and you won't fit the moon edge to edge (horizon to horizon).
2. An 80mm lens on an 8x10 view camera (if you could possibly ever locate one .. I think non-existent) would make that lens an UWA type on that 'sensor size', if such a lens were possible. I'd guess at least a 120° FOV.

So while an 80mm lens on <whatever sensor> may allow 3 moon diameters within 1.4°, on a smartphone it'll be more like 0.5 moon diameters for the entire frame, and possibly 10 moons in that same 1.4° of angle on the view camera sensor.

This is why the topic of focal length and sensor size 'cropped up'(pardon the pun) .. just an easy way to determine FOV, then having worked out percentage of movement of the camera(via the streak length) .. it was easy to show that the camera ... 'barely moved' in relation to the operator, but much more obviously in relation to the rendered image.

ameerat42
10-12-2017, 8:35pm
Phew! Let's call a break. There's enuff info here for people to wade through and conclude for 'emselves.

Maybe in a day or two I'll put up a diagram to illustrate an otherwise 1000 words. I've only persisted in the discourse
so far because it's not something that can be just left aside.

As Walter Concrete may have said, "Cementics can get in the whey!"

Are you happy to take a breaver?:p

tandeejay
10-12-2017, 8:49pm
Wow! I never imagined my image would generate so much discussion!

Might need to win the Creative processing challenge so I can put it in there :D

ameerat42
11-12-2017, 8:21am
Wow! I never imagined my image would generate so much discussion!

Might need to win the Creative processing challenge so I can put it in there :D

Ah, so that was it?!! You had it in the wrong section:p

Mark L
13-12-2017, 9:06pm
:confused013:confused013