
View Full Version : The "LIGHT L16" Camera



Kazuumiao
11-10-2015, 1:46pm
Just wondering if anyone has any thoughts on this new gadget? 'Cause it sounds and looks interesting :camera:

Gizmag Source http://www.gizmag.com/light-camera-combines-16-sensors/39764/
Original Source https://light.co/

David :)

http://cdn03.androidauthority.net/wp-content/uploads/2015/10/Light-L16-Android-camera-front-and-back-840x454.jpg

*image removed and replaced with a link. As per the site rules, you cannot post photos on this site that you do not own copyright to : Admin*

ameerat42
11-10-2015, 2:02pm
First link: lots of words, little talk, but not much info.
2nd link: lots of smiles, lots of cut-tos, lots of suggestion, and almost no info.

Opinion: need better links.

- - - Updated - - -

PS: This excerpt from the text on the 1st link:

...that takes photos the quality of a 52-megapixel DSLR,...

dozen mean much.

ricktas
11-10-2015, 5:28pm
Interesting idea, but it still ends up being really tiny sensors capturing the image, with really tiny individual pixel sites. Fine for those used to shots from the sensor in their phone, but I reckon most serious photographers used to APS-C or FF sensors will soon pick up on the reduced light-gathering these tiny sensors have compared to a DSLR- or mirrorless-sized sensor.

Though, that is also not a bad thing: as more and more people have cameras in their phones, or now this, rather than a traditional 'camera', we will see 'photographer' defined by gear. "Oh, you have a proper camera, you must be into photography".

And at $1699 USD, (which right now is about $2315 AUD) you could get a great DSLR or mirrorless and a couple of lenses.

ameerat42
11-10-2015, 5:38pm
Just a thought. Are there any like gadgets already out there? If so, google them and have a look
at the image quality. As it is, it seems more like a "lifestyle thing" than a serious photographic device.

--Just for comparison, I have a 12 MPx phone cam in a not-so-modest phone. One day I found myself admiring
the panorama from Balls Head, Sydney (~Mt Coot-tha view). I eagerly took a series of shots and later stitched them.
Well, what a moan I uttered at the resulting quality (of the examined single images). I would have got more detail from a
lower-megapixel standard camera (and I know, because I did equivalent views with a series of 4.7 MPx images from an APS-C camera).

This is just saying, though, and who knows, they may have improved things out-of-sight:cool:

Ie, do some research, particularly images.

I @ M
11-10-2015, 6:04pm
I predict ultimate failure because ( as far as I can see ) it doesn't include a phone.

To me it appears that the type of person who would be interested in this is the type that clutches their smartphone 24 hours a day. Their phone has a camera, often with 2 lenses or 2 cameras, and they simply won't be interested in carrying both a phone and a camera with them at all times, or even sometimes.

Novel idea if it works well but unlikely to catch on in a commercially viable manner.

EDIT: I didn't watch the film, but from the text it appears to have unsociable media sharing capabilities, so maybe more than a handful of people will buy one. But at the asking price it would want to do all of its hype and then some, and out-banana the newest, latest and greatest phones, which are available with halfway competent cameras at around 1/2 to 1/3 the price.

swifty
14-10-2015, 3:12pm
This will be the future of photography :P

I say that with my tongue firmly in my cheek of course.
But IMO there are some truths to that statement.
Not the actual L16 camera (which I think has a bunch of issues, like the Lytro), but computational photography will take over mass photography in the future. And by mass photography I mean consumer smartphone photography, which dominates by volume.
I think we're not far from seeing multiple-sensor, multiple-lens units appearing in smartphones, but the implementation will be a bit more elegant and miniaturised than the L16, so as to fit in a standard smartphone. And through some genius algorithm, image quality will be much improved. Whether this computational way of deriving the image counts as photography in a 'pure' sense I can't say, but to the average consumer it doesn't matter as long as the image looks better.
But I hate this 'DSLR quality' marketing term that keeps getting used, which is so arbitrary and unspecific.

I think this computational imaging technique will also have implications for us enthusiast photographers. Deconvolution is already popping up in post-processing techniques. We might see more 'sensors' added to future enthusiast and pro cameras that record a bunch of other metadata that can be used to improve the IQ of the primary imaging sensor.
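To make the multi-sensor merging idea concrete, here is a minimal NumPy sketch (purely illustrative numbers, not Light's actual pipeline) of why computationally combining many small-sensor frames improves image quality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "scene": a flat grey patch, value 0.5.
scene = np.full((64, 64), 0.5)

def noisy_capture(scene, sigma=0.1):
    """Simulate one small-sensor exposure with Gaussian read noise."""
    return scene + rng.normal(0.0, sigma, scene.shape)

# One frame from a single tiny sensor...
single = noisy_capture(scene)

# ...versus a computational merge of 16 simultaneous frames,
# roughly what an array of 16 small camera modules could provide.
stack = np.stack([noisy_capture(scene) for _ in range(16)])
merged = stack.mean(axis=0)

noise_single = (single - scene).std()
noise_merged = (merged - scene).std()
```

Averaging N independent frames cuts random noise by roughly the square root of N (about 4x for 16 frames), which is one reason an array of tiny sensors can at least plausibly chase the quality of one larger sensor.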

arthurking83
15-10-2015, 12:41pm
It'll have some initial curiosity value for a short while, but ultimately the product will recede into oblivion once this initial wow factor calms down.

Same thing happened to the Lytro(focus after shot tech) and the Nokia Lumia 1020(40Mp camera).
For a short while after their release, every second news article, or blog, or image posting on the net was about one of those devices due to a unique feature.
But once that curiosity value subsided, you don't get many postings about the images any more.
I dare say most of the folks who purchased at the beginning have laid those devices in a drawer and haven't used them in a long while.

I agree that computational imagery will come to smartphones, I think the biggest issue is the format that the images will be saved with.

That is, for example .. the Lytro uses its own proprietary format, which is a right pain for viewing the images easily.
You either need a web browser to view the images off Lytro's site directly, or some software that allows viewing of the images with their unique abilities.
Not really ideal!

Imagine if Canon's cameras only saved in .cpg format, and Nikon's cameras only saved in .npg format, and you required specific Canon/Nikon software respectively to view those images, and no other image format was possible from those manufacturers!
I doubt that anyone would bother with trying to view the images considering the muck around that requires downloading, installing and using some proprietary software/app just to look at images!

As far as I can tell, no universally ratified format has been established for viewing these (so called) computational images .. and any derivative standardised-format images are no different to any other image from any other device, so the feature is lost on the product.

So my prediction of this product will be:
An initial short lived burst of spurious entertainment, followed by a quick descent into oblivion.

swifty
17-10-2015, 1:20pm
I think they would all output jpegs, no?
But yes, with all this additional info, there would need to be some other form of file format that the user can manipulate on their native device.
But posting to various media should be an easy button click into a universally used format such as JPEG or GIF?
Actually, I haven't looked it up, but I'd imagine that's how Live Photos on the iPhone 6s must do it.

Mark L
17-10-2015, 10:12pm
I'm interested if it helps me take photos like this (but who wants to do that and the market will be for people that don't want to take photos like this I suppose),

[attachment 120758]

arthurking83
17-10-2015, 10:40pm
I think they would all output jpegs, no?......

Lytro use some weird format for viewing the dynamic 'stacked' images that they're supposed to be famous for.

Apparently you can convert them to jpg too, but then it becomes a standard jpg file and you can't 'change focus' whilst viewing the jpgs.

It makes sense for makers to output to a dynamic format like gif or png, or whatever allows dynamic image content at reasonable quality .. but do you reckon they'll do it! :rolleyes:

This mob will use some proprietary file format(for the dynamic images) and lock you into viewing the images via their viewer software via their website or something stupid like that.

hopefully this won't be the case, and some sort of near-ratified file format is being nutted out as we speak .. and my posts will sound way over-the-top cynical :p

arthurking83
18-10-2015, 8:37am
Just watched a video featuring the CEO explaining a little more about its workings.

The final image format is simply a raw (or jpg) file that is processed computationally before the final file is written.
So once the image is finalised it's fixed; it's not like the Lytro, where it can be altered to display a different focus plane.
But what it apparently does allow is altering the perspective during post-processing (or in camera).

So basically it takes a bunch of shots using the various cameras/lenses, and you then set the perspective of the image as part of the post-processing .. which is probably done in Light's software/app .. or whatever.

Sounds a bit more interesting with that info now made available, but it still seems to be weighted more towards the P&S/smartphone camera user than the DSLR/mirrorless user.

swifty
18-10-2015, 6:08pm
The final image format is simply a raw (or jpg) file that is processed computationally before the final file is written.
So once the image is finalised it's fixed; it's not like the Lytro, where it can be altered to display a different focus plane.
But what it apparently does allow is altering the perspective during post-processing (or in camera).

So basically it takes a bunch of shots using the various cameras/lenses, and you then set the perspective of the image as part of the post-processing .. which is probably done in Light's software/app .. or whatever.


This is what I guessed the workflow would be like: take the shot, manipulate the proprietary 'RAW' file on the native device, then output in a universally accepted format like jpeg, or possibly gifs, where there might be a short 'animation' running the point of focus from near to far.
If they are smart, they'd have popular social media integration, like output directly into Instagram after the initial perspective/focus-point manipulation on the native device.


Mark: I very much doubt they had that in mind when designing this camera.
But let's say future DSLRs have gyros and other motion sensors that detect camera movement/rotation during capture, and this is recorded in a metadata file. Then in post-processing, that info could be used to deconvolve more accurately and help compensate for blurring due to camera movement, as opposed to subject movement.
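A minimal sketch of that gyro-metadata idea, with an entirely hypothetical gyro log and focal length (no real camera's metadata spec is assumed): integrate the recorded angular velocities into a blur path, then rasterise that path into a point-spread function that a deconvolution step could use.

```python
import numpy as np

# Hypothetical IMU log: angular velocity (rad/s) sampled during the exposure.
gyro_yaw = np.array([0.02, 0.03, 0.03, 0.02, 0.01])
gyro_pitch = np.array([0.00, 0.01, 0.02, 0.02, 0.01])
dt = 0.02            # seconds between samples (assumed)
focal_px = 3000.0    # focal length expressed in pixels (assumed)

# Integrate angular velocity -> angle -> pixel displacement of the blur path.
path_x = np.cumsum(gyro_yaw) * dt * focal_px
path_y = np.cumsum(gyro_pitch) * dt * focal_px

def blur_kernel(path_x, path_y, size=15):
    """Rasterise the camera-shake path into a normalised PSF."""
    k = np.zeros((size, size))
    cx = cy = size // 2
    for x, y in zip(path_x, path_y):
        k[cy + int(round(y)), cx + int(round(x))] += 1.0
    return k / k.sum()

psf = blur_kernel(path_x, path_y)
# This PSF could then seed a deconvolution (e.g. Richardson-Lucy) to undo
# camera-movement blur, without touching blur caused by subject movement.
```

Because the PSF is derived only from camera rotation, the correction applies to camera shake, which is exactly the distinction between camera movement and subject movement made above.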