Heavy post-processing of images, even without Master AI? - Huawei P20 Pro Questions & Answers

Hi,
Took this picture this morning, with Master AI disabled.
https://imgur.com/a/EsMsf
See how the area around the actual sun looks like a Teletubbies sunrise due to post-processing? The sunrise I tried to capture was nice, but also actually on Earth. I'd like the picture to reflect that.
Any way to disable this post-processing? I saw various reviews saying the same about some beauty filter on portraits, even with AI/filters disabled.

Shoot in RAW if you want to completely remove processing.
Short of that, there are quite a few other things you can do to get pics more to your liking. Selecting the focal point and metering point can have a big impact on the final result, as can very quick adjustments in editing. Checking the default settings for things like contrast and adjusting them there is also worth a look.

Related

Improve Camera Quality in Automode?

Hello,
I just got my hands on the Z3 Compact and took it out to test it on a sunny day.
Back home I noticed that the picture quality is really bad in auto mode!
I made a quick comparison picture against my old phone: a Xiaomi Mi2 (not the S model) with an 8MP camera.
The picture shows the text quality of the Xperia Z Ultra Power Pack, which is the best example I can do right now.
The automode settings are: ISO-800, F/2, 1/50 Sec, no flash
The Mi2 automode settings are: ISO-488, 1/16 Sec (no data on the F), no flash
Directlink:
http://abload.de/img/neuebitmapoksb8.png
And here the same with manual mode and a lower ISO (100 instead of 800 that was used in auto mode):
The complete settings were: ISO-100, F/2, 1/8 Sec, no flash
Directlink:
http://abload.de/img/dsc_0099emqqy.jpg
Here's another example of a picture I took outside (without zoom; I just cut away some parts to make it smaller):
The automode settings here: ISO-50, F/2, 1/320sec, no flash
Directlink:
http://abload.de/img/dsc_0036lujf6.jpg
Any idea why auto mode causes such bad picture quality? Any ideas on how to improve it?
Thanks for any help!
Why bother so much about the auto mode? You can take such great pictures in manual mode when you play with the settings. In the end auto mode will never be great, because it does what it says: auto mode, it adjusts the settings to what it thinks is best in each situation.
Playing with the manual mode will also give you more knowledge of basic photography.
Sent from my D5803
Auto mode became better over time on the Z1C, I guess they'll keep improving it. Dunno if they made a step back here.
Dsteppa said:
Why bother so much about the auto mode? You can take such great pictures in manual mode when you play with the settings. In the end auto mode will never be great, because it does what it says: auto mode, it adjusts the settings to what it thinks is best in each situation.
Playing with the manual mode will also give you more knowledge of basic photography.
Sent from my D5803
Auto mode will always be handy; no one wants to mess with settings most of the time. It's a phone camera, and if I want manual controls I'll pick a DSLR. That's why the iPhone wins in the camera department: take it out and snap a pic instantly with great output. Even on my Galaxy S5 I take pictures on auto, and I haven't seen anyone setting things up manually each and every time just to take a damn photo.
Auto mode uses a technique called oversampling to gather information with the 20MP sensor, then heavily processes the photo to whatever the software (Sony) decided was best (post-processing). The idea is that you get the detail of a 20MP sensor in an auto-corrected and down-sized 8MP photo. Oversampling is also why the Z3 has a small amount of "lossless" zoom. (Ever tried "zooming" with other phone cameras? It usually leaves you with a terrible blob of digital noise.)
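If you want to convince yourself of the noise benefit, here's a toy numerical sketch (my own illustration, not Sony's actual pipeline; it uses a simple 2x2 block average rather than the real 20MP-to-8MP 2.5:1 ratio). Averaging four noisy samples into one output pixel cuts random noise roughly in half (by the square root of the sample count):

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat grey scene: any deviation from 128 below is pure sensor noise.
scene = np.full((1000, 1000), 128.0)
noisy = scene + rng.normal(0.0, 10.0, scene.shape)

# "Oversample": average 2x2 blocks so four samples become one pixel.
small = noisy.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(f"noise std before: {noisy.std():.2f}")  # ~10.0
print(f"noise std after:  {small.std():.2f}")  # ~5.0, i.e. 10 / sqrt(4)
```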
As with any automatic post-processing, there are pros and cons. The truth is, the software doesn't really know what you're taking a picture of, so it gives its best guess when correcting exposure, colour, noise, etc. What you're seeing in the auto-mode photo is the result of heavy post-processing (noise reduction), bad focus, and camera shake.
The reason your "manual" photo is better is that manual mode drops the post-processing. It also looks like you were able to hold the camera steadier for the manual shot.
"Auto" mode is far from perfect, but it will often save you more times than you know. Over time, you'll learn the strengths and weaknesses of "auto mode", and you'll know when you need to switch to Manual for the better shot. Auto mode can also be easily improved upon via software updates.
PS: A little trick I use to minimize camera shake while taking a photo is to set a quick 2-second self-timer. This allows you time to press the shutter button and then stabilize the phone for minimal "camera shake".
I have read that the Z3 Compact camera is great, great, great... but yeah, I have been grossly underwhelmed by the auto mode. The auto mode is THE mode... sure, have a manual mode if you want... if you have time. But I use my phone for quick snaps... QUICK being the operative word. I want to pull it out, aim and shoot. My iPhone 5 took very acceptable pictures. The Z3 Compact has shown me grainy, bland-looking shots in auto.
I don't get why auto mode isn't the most important mode for designers. It's a phone... not a camera... so make the auto mode work.
Yeah, camera is definitely underwhelming. That being said though, it's better than most. My Moto X took absolutely horrid shots for the most part.
Crewville96 said:
Yeah, camera is definitely underwhelming. That being said though, it's better than most. My Moto X took absolutely horrid shots for the most part.
Coming from 2 years on the iPhone 5... I was under the impression that camera technology was pretty well mastered across the board. The iPhone makes it look easy. There's even an annoying lag between pressing the button and the shot being taken on the Z3... what the hell is up with that?
Eclypz said:
Auto mode uses a technique called oversampling to gather information with the 20MP sensor, then heavily processes the photo to whatever the software (Sony) decided was best (post-processing). The idea is that you get the detail of a 20MP sensor in an auto-corrected and down-sized 8MP photo.
Well, my Z3C is still on the way. I have a question: will the 8MP pictures I take in Manual mode be oversampled as well?
The sensor is still 20MP, so if I manually set it to take only 8MP pictures, what advantage do I get from a 20MP sensor? I never print photos; I only view them on my phone, my laptop or my 50" 1080p LED TV. I don't want photos with large file sizes unless that benefits me in some way beyond printing or viewing at very large resolutions. I see that oversampling in auto mode benefits from the 20MP sensor, but is that also the case if I take 8MP pics in manual mode?
coolmalayalee said:
Well, my Z3C is still on the way. I have a question: will the 8MP pictures I take in Manual mode be oversampled as well?
The sensor is still 20MP, so if I manually set it to take only 8MP pictures, what advantage do I get from a 20MP sensor? I never print photos; I only view them on my phone, my laptop or my 50" 1080p LED TV. I don't want photos with large file sizes unless that benefits me in some way beyond printing or viewing at very large resolutions. I see that oversampling in auto mode benefits from the 20MP sensor, but is that also the case if I take 8MP pics in manual mode?
By selecting 8MP in manual mode, all you're doing is resizing the photo from 20MP (post processing). The sensor will always capture at its full resolution.
If you know you only want an 8MP photo, there is a small benefit in resizing the photo on the phone:
The first benefit is obviously file size, but before I get into the second reason, I need to explain something first: a picture that has been converted to JPG is considered to be post-processed. The compression the JPG engine performs means your image loses detail and has thus been altered. I know I said above that Manual mode means the image isn't processed, but I really only said that for the sake of explaining things more easily. The average user does not consider JPG compression post-processing, and they probably don't care to know. The truth is, unless Sony allows us to capture images in RAW format, the act of converting all our images to JPG means they are all being post-processed whether we like it or not. The difference between manual and auto mode is really about "how much" post-processing occurs. In manual, Sony is most likely just compressing to JPG (and probably applying lens distortion correction, but I won't get into that now), and not applying corrections like noise reduction.
As for how it may be beneficial to resize on the phone: JPG compression is usually the final step in post-processing. So by resizing on the phone, the theory is the image is captured in RAW at 20MP > resized to 8MP while still in RAW format > compressed to JPG.
This means you benefit from the photo being resized before it is "post processed". In theory, this method should leave you with a higher quality 8MP photo versus resizing from a computer. Resizing from a computer means you're applying post processing to an already "post processed" photo.
For the average user, 8MP is more than enough; however, this is not to say all phone cameras should ship with 8MP sensors. Keep in mind that there is a big difference between an image captured by an 8MP sensor and one captured by a 20MP sensor and then resized to 8MP. The 20MP sensor can capture much more detail with proper/sufficient lighting.
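To make the ordering argument concrete, here's a rough sketch with Python/Pillow (file names, dimensions and the quality setting are made up for illustration; a PNG stands in for the uncompressed capture, since we can't access the true RAW data):

```python
from PIL import Image

full = Image.open("capture_full.png")  # stand-in for the uncompressed capture
target = (3264, 2448)                  # ~8MP

# Phone's order: resize the not-yet-compressed data, then compress once.
full.resize(target, Image.LANCZOS).save("resized_on_phone.jpg", quality=90)

# Computer's order: compress at full resolution first, then resize the
# already-lossy JPEG and re-encode it a second time.
full.save("full_res.jpg", quality=90)
Image.open("full_res.jpg").resize(target, Image.LANCZOS).save("resized_on_pc.jpg", quality=90)
```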
@wooki (OP):
Especially in the first comparison ("Xperia Z Ultra Power Pack"), the shot you made with the Z3C is nothing but blurred. So what is it you're trying to show/compare? I mean, yes, maybe the Z3C doesn't come with the best camera on the market, and yes, the "Auto mode" does not always provide the best results. Not sure whether you're into photography or not, but what can be expected from a lens not even half the size of a fingernail? Not too much, right?! Get an SLR with decent lenses and a full-frame sensor if you need more/better.
However, the attached photos were among the first ones I took with the Z3C (in Auto mode), and I think they're quite OK. No processing, I just resized them.
@sxtester
I was comparing my Z3C with my old phone (a 2-year-old Xiaomi Mi2), which seems to have a very good auto mode. I was just asking whether I was the only one who has had a bad auto mode experience and whether someone knows how to improve it.
How do your pictures look without resizing?
Since I have a WQHD screen, all my auto mode pictures look very bad!
I don't want to set up manual mode every time I want to take a picture; this phone has a shutter button for quick shots, and the setup phase costs me time, even if manual mode gives me excellent pictures.
Eclypz said:
Auto mode uses a technique called oversampling to gather information with the 20MP sensor, then heavily processes the photo to whatever the software (Sony) decided was best (post-processing). The idea is that you get the detail of a 20MP sensor in an auto-corrected and down-sized 8MP photo. Oversampling is also why the Z3 has a small amount of "lossless" zoom. (Ever tried "zooming" with other phone cameras? It usually leaves you with a terrible blob of digital noise.)
The manual mode uses oversampling as well, if you select a lower resolution. I've compared an auto mode shot with a manual mode shot of the same scene, and both were equal in terms of details and noise. The main difference was that the auto mode shot looks far worse because it tends to use that horrible HDR which just washes out the photo and ruins the contrast to near non-existence. I find that "multi" light metering mode, selectable in manual mode, gives far better results than HDR on this phone.
Auto:
http://i.imgur.com/er38iZn.jpg
Manual:
http://i.imgur.com/Oqwl3KE.jpg
Furthermore, the pictures from this phone's camera would look a lot better if Sony used a better algorithm for their oversampling.
Here is a comparison between a 100% crop of an image taken using Sony's oversampling (8mp) (former attachment) and a 100% crop of a photo taken at 20mp, and then downsampled to the 8mp dimensions using Irfanview (latter attachment):
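(If you want to reproduce this kind of comparison yourself, a few lines of Python/Pillow will do it; the file name and target size here are placeholders. Each filter is a different downsampling algorithm, which is exactly the variable being compared above.)

```python
from PIL import Image

img = Image.open("dsc_full_20mp.jpg")  # placeholder: a full-resolution shot
target = (3264, 2448)                  # 8MP-equivalent dimensions

# Write one 8MP file per resampling filter, then compare 100% crops.
for name, flt in [("nearest", Image.NEAREST),
                  ("bilinear", Image.BILINEAR),
                  ("lanczos", Image.LANCZOS)]:
    img.resize(target, flt).save(f"down_{name}.jpg", quality=95)
```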
I agree... Sony's software is lacking compared to everybody else's. Auto mode seems kind of hit or miss. Their camera sensors are excellent, as I believe the iPhone uses a Sony sensor, but the difference is that Apple is able to process better-looking images with its software. I also have an iPhone 5s, and I must agree that 7/10 times I'll get a better-looking image from the iPhone. In terms of capturing detail, I think the Z3C is better (as expected), but all my images from the Z3C are on the "red" side when shooting in auto mode. In the end, the iPhone comes out with the better-looking photo, because I'd much rather have better colour reproduction than slightly more detail that you wouldn't even notice unless you had a photo to compare against.
I still think the Z3C's camera is on par with the best from Samsung's Galaxy S5 and LG's G3 (Sony sensor). It's way better than my old HTC One M8's "ultrapixel".
On the Android side of things, I think Z3C is still top 3, and Top 5 in the Smartphone world (iPhone and Lumia above it).
wooki said:
@sxtester
How do your pictures look without resizing?
@wooki:
Here you go, all unedited, taken in Auto mode:
http://imgur.com/uMiM0Sh
http://imgur.com/0mYsf5U
http://imgur.com/vJ32fjT
http://imgur.com/8g7oJD7
degraaff said:
Here is a comparison between a 100% crop of an image taken using Sony's oversampling (8mp) (former attachment) and a 100% crop of a photo taken at 20mp, and then downsampled to the 8mp dimensions using Irfanview (latter attachment):
Sony's approach looks way better because it doesn't blur that heavily. If I want to blur away all the details, I can still do that myself.
This is a bit off-topic, but I don't really want to start a new thread just to ask such a silly question.
I've been playing with the camera app some more and is there seriously no "rule of thirds grid" in Sony's Camera app? I often like to use the grids to assist in making sure my shot is straight.
Iruwen said:
Sony's approach looks way better because it doesn't blur that heavily. If I want to blur away all the details, I can still do that myself.
Really? Sony's approach is full of oversharpening artifacts and halos; it doesn't look better at all, IMO.
One dumb question:
if I use another camera app, will it improve the photo quality?
point_pt said:
One dumb question:
if I use another camera app, will it improve the photo quality?
It depends. I chose CFV-5 with PNG image saving (rather than JPG), and it looks much better than Superior Auto, and sometimes better than Sony's Manual mode.

Shooting in good light – get the best image quality out of your Note 4!

NOTE: currently, as of KitKat 4.4.4 and firmware version NK4 (Snap805) / NK5 (Exynos) and all versions before, this article only applies to Snapdragon 805 users. Exynos users cannot improve the image quality of their shots in any way and are therefore advised to use the stock Camera app. Consequently, they won't learn much from this article either.
Introduction
This article only concentrates on getting the best possible image quality while shooting in GOOD light, that is, when the phone can use as low sensitivities (ISO's) as possible, resulting in typically low noise levels. The Lightroom etc. settings I present are, consequently, typical for low-ISO shots taken in good light. Should you be interested in low-light shooting, head for THIS article instead. I don't discuss any kind of HDR, including that of the Note4 camera app, here. Please read THIS article for HDR tips and tricks.
If you've read my previous posts / articles on the camera of the Note4, you know very well that the stock Camera app is not capable of very good results because it applies unnecessary noise reduction and sharpening, practically destroying the image quality. Up to now, I've recommended Snap camera HDR (“Snap” for short; PlayStore link; please see my original low-light article for more info on obtaining the latest beta) as an all-in-one app for shooting both video (including 4K) and stills. It may not have the best GUI (in this regard, the FV-5 apps (Camera/Cinema) are far superior) and it lacks essential features like exposure bracketing (see my HDR article linked above), but it's the only app that can produce images making full use of the hardware's capabilities. For example, it's the only app I know of that can go below a 1/15 s shutter speed (please see my above-linked low-light article for more on this very subject).
If you really want to achieve the best image quality, you'll, as you'll see below, need to do a little additional work. This is what this entire article is all about: a very detailed look at color noise reduction (CNR for short) and sharpness increasing during post-processing in
- Lightroom on the desktop
- Neat Image on the desktop
- Topaz DeNoise on the desktop
- Lightroom Mobile on Android (the iOS works in exactly the same way)
- Photo Mate R2 on Android
all compared to shooting with Snap camera HDR using its built-in CNR and sharpening support.
1.1 Recommended reading before reading on
If you don't know much about the theory of photography, please read THIS and THIS for more info on image noise and sharpening, respectively.
Note that the former link takes you to Part I of the article series; the second part is HERE and is a hugely recommended read because, among other things, it clearly explains the differences between luminance and color noise. It's the latter that I'm specifically discussing in this article, the former being less unnatural-looking.
The article on sharpening provides several examples of oversharpening artifacts. It's these artifacts that - along with color noise - we'll try to minimize while keeping our shots sufficiently sharp.
1.2 The goal - why do you want to read this article at all?
To produce as good images as possible. Regretfully, the stock Camera app coming with the Note4 applies far too much CNR and oversharpening even when shooting in broad daylight at base ISO. In the comparative examples below, I show you several crops that demonstrate this in practice.
1.3 Three ways of shooting
There are three ways of shooting. Below, I introduce them in order of decreasing complexity (need for additional work) and, regretfully, also strictly decreasing achievable image quality.
1.3.1 Using a camera app producing as little-processed images as possible and (possibly) using desktop apps to make these images more natural-looking
First and foremost, if you don't want to lose any bit of (later) achievable image quality, you must save your images with as little processing as possible. This is exactly what is done when using Snap camera HDR with the non-default settings ("Samsung camera mode" on, sharpening set to zero and JPEG output quality set to "Best") I recommend.
However, the output won't really be eye-friendly then, even if you shoot in the most optimal conditions, that is, in as much light as possible. If you do have the time for desktop (x86) post-processing, you can achieve significantly better image quality than with Android-only image processing, be it done straight in the camera app doing the actual shooting or in another Android app you use for post-processing.
In the following two subsections, I show you several examples of the typical noise reduction and sharpening you can achieve with high-quality desktop tools working on as little-processed input as possible. As you'll see, the results they produce are not only significantly more eye-pleasing than the original, somewhat noisy and definitely soft (RAW-like) output of Snap camera HDR, but also orders of magnitude better than the absolutely messy output of the stock Camera app.
1.3.1.1 Noise in the near-RAW output images
The sensor of the Note4 has relatively small pixels. This, as you are already aware, results in a low(ish) signal-to-noise ratio, meaning visible color noise even in the best conditions if absolutely no noise reduction is used. (Actually, you'd need significantly larger pixels (full frame, assuming a Bayer filter) and/or special filter arrangements (APS-C sensor size paired with Fuji's X-Trans filter array) to achieve a total lack of visible noise.)
Let me show an example of this. The following crop (cropped from the original image) shows visible color image noise in the near-black window area:
And yes, this shot was taken in broad daylight at base ISO.
Note that, in section 1.3.1.2.1 below (obviously, in the first, unaltered, almost-RAW crop), you can also spot some color noise in the tree trunk. However, on dark, homogeneous surfaces like, in this case, a black window, it's far easier to spot color noise – and to fine-tune CNR while trying to (almost) completely get rid of it.
For comparison, here are the already CNR'ed (and sharpened, see next section) output crops of the three desktop tools (Lightroom, Neat Image and, finally, the Lightroom + Topaz DeNoise combo) I'll introduce in section 1.3.1.2.1 below:
Lightroom:
Neat Image:
Lightroom + Topaz DeNoise:
1.3.1.2 Lack of sharpness in the near-RAW output images
The output of the sensor is, generally, pretty soft with most cameras (not only the Note4). This is caused by the not-very-good lens (or one operating far from its "sweet spot") and the Bayer / X-Trans filter sensor, as opposed to a Foveon sensor paired with a tack-sharp lens. This (relative) softness can be somewhat fixed purely in software. This is called 'sharpening'. Unfortunately, you can't use arbitrarily high amounts of sharpening, as it'll lead to both very ugly oversharpening halos around contrasty edges and much more pronounced luminance noise.
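Software sharpening is typically an unsharp mask: the image is blurred, the blur is subtracted to isolate the edges, and the amplified edge signal is added back. Overdoing the strength or radius is precisely what creates the halos. A minimal sketch with Pillow (the file name and the parameter values are illustrative, not the settings of any app discussed here):

```python
from PIL import Image, ImageFilter

img = Image.open("near_raw_shot.jpg")  # placeholder for a soft, unprocessed shot

# Moderate unsharp mask: small radius, modest strength.
ok = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))

# Exaggerated radius and strength: this is what paints bright halos
# along contrasty edges and amplifies luminance noise.
bad = img.filter(ImageFilter.UnsharpMask(radius=6, percent=400, threshold=0))

ok.save("sharpened_moderately.jpg")
bad.save("oversharpened_halos.jpg")
```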
Let me show you a pair of crops from the same original image and then compare them both to a decently sharpened one (still without major oversharpening artifacts) and, finally, to that of the stock Camera app, which shows absolutely awful oversharpening halos.
1.3.1.2.1 Trees (oversharpening halos):
The original, non-sharpened image (shot with absolutely zero software sharpening):
(original, full image)
After processing with one of the most widely used desktop apps for image post processing (with the parameters CNR=10, Sharpening=40, everything else being default, incl. LNR=0), Lightroom 5.7:
(original, full image)
and another one from Neat Image – free(!!) for private, non-commercial use (with some not very severe restrictions) and multiplatform (Windows, Mac and even Linux) – with LNR 0, CNR set to maximum and Sharpness also set to maximum:
(original, full image)
Finally, the output of another excellent desktop noise handler, Topaz DeNoise (5.1.0) with Clean Color set to 50 and all other settings at default after Reset and additional Lightroom Sharpening of 40 (that is, the same as with the Lightroom-only image above):
(original, full image)
And this is how the stock Camera app renders the same:
(original, full image)
In the last image, notice the absolutely hideous "edges" around the tree trunk against the bright wall. (Note also the visible loss of fine detail on the trunk of the tree.) Also, in all the previous images but the very first (the one originally saved by Snap camera HDR), notice the lack of color noise (which is somewhat present in the original Snap output in the dark windows and on the dark brown tree trunk) and the significantly higher sharpness (still without annoying oversharpening halos).
1.3.1.2.2 Bush (sharpening, smearing):
The original, non-sharpened image as saved by Snap:
(Note that the original, full images are at exactly the same URL as in the previous section. Also, for the next three shots, the processing parameters are also equal to the ones I've already listed above.):
Again, notice how soft this shot is compared to the next images – that's because of the complete lack of any software post-sharpening.
Lightroom (CNR=10, Sharpening=40):
Neat Image:
Topaz DeNoise:
Finally, for comparison, here's the output of the stock Camera app:
The last image is, as with all the other stock Camera app crops, absolutely awful. There is major detail smearing, the color saturation is heavily reduced and the edges are oversharpened. Yes, another example of why I in no way recommend the stock Camera app unless you absolutely need to make use of its features.
1.3.2 Post-processing on Android & most known problems of lower-quality CNR algorithms
In the previous section, I showed you examples of the quality achievable when processing near-RAW images shot on the Note4 with strictly desktop (x86: true Windows / OS X and, in some cases, even Linux) tools. In this section, I elaborate on doing the post-processing right on your Android phone. As you'll see, the results will be substandard compared to the desktop-based ones. Nevertheless, they'll still deliver better-quality results than using the in-app CNR and sharpening features of Snap camera HDR.
1.3.2.1 Photo Mate R2
The following are the same crops as above, from the well-known and (for Android) quite expensive professional app “Photo Mate R2” (currently tested version: 2.6). The parameters I used (and found most optimal): CNR=30, Luminance=High quality, Sharpening=75. (Original, full image; screenshot of the settings)
Black window:
Tree:
Bush:
As you can see, while these crops are still orders of magnitude better than those of the stock Camera app and still deliver more eye-pleasing (that is, significantly sharper and definitely less noisy) images than the near-RAW output of Snap, they can't match the output of the desktop tools.
If you do compare these results to those of the three desktop tools introduced in Section 1.3.1, you'll immediately see that the CNR, while not as effective as theirs (just compare the color noise in the black window shot!), has resulted in a significant drop in color saturation. Just compare the saturation of the brown in the bush shot to that of the desktop tools.
1.3.2.1.1 Why can't you just increase Photo Mate R2's CNR to reach the level of cleanness of desktop tools?
Unfortunately, it's not only color saturation that suffers when increasing the CNR level in Photo Mate R2 – as opposed to the three desktop tools.
Simple(r) and/or faster CNR algorithms just smear colors. This was the major reason (and not the further decrease in color saturation) that I simply couldn't increase the CNR level in Photo Mate R2 any further. Let's take a look at the following crop at CNR=30 (that is, with the same settings as in section 1.3.2.1 above):
and compare it to the CNR=40 case, that is, with slightly increased CNR strength:
Do you notice the difference? Surely you do. The bench's thin, vertical boards look completely unnatural (as if they were discoloured) in the second case, while they don't exhibit similar problems in the first one. In order to avoid this, you absolutely must stay with lower noise reduction levels.
Now, let's compare how the bench is rendered by the top desktop PP tools (the three introduced in Section 1.3.1) at their significantly higher CNR levels (again, they got rid of the color noise much(!) more effectively):
Lightroom:
Neat Image:
(Sharpening = 75)
(Sharpening = max)
Topaz:
1.3.2.2 Lightroom Mobile
Regretfully, the otherwise (for Adobe's Creative Cloud subscribers) free Lightroom Mobile (LRM for short) is absolutely a no-go if you want to do Android-only post-processing.
1.3.2.2.1 Need for a “true” desktop
First and foremost: LRM doesn't do any kind of mobile-side processing unless you share your images right from the client (and then you can only share a low-res, pretty much useless version). It just communicates the processing parameters you set back to the cloud, and you'll need to use the desktop LR to post-process your images based on the parameters you set in the GUI.
For example, the three levels of Detail > Noise Reduction set the following parameters for further (again, desktop-based) processing:
Low: Luminance 25 / CNR 25
Med: Luminance 50 / CNR 25
High: Luminance 75 / CNR 25
(The default, if you don't set any NR level, is Luminance 0 / CNR 25.)
That is, there's absolutely no way to get desktop-level output on mobile without involving some kind of desktop post-processing. This also means that if you directly access the images (seemingly) edited on mobile and synchronized back to the cloud, via
- either the desktop file system (in its temporary directory, via "Show in Finder/Explorer")
- or an explicit export using the "Export" button in the bottom left corner of the Library view, setting "Image format" from "JPEG" to "Original" in the "File Setting" group of the export dialog,
all you get is an unprocessed (original) image.
1.3.2.2.2 The built-in “Share” feature
And if you do share on-mobile-processed images right from the client, they'll be downsized, no matter what you do. HERE is the output of an LRM-post-processed and then in-app-shared image. A crop of the same bench:
See the VAST difference in resolution?
1.3.2.2.3 LRM Summary
All in all, you can forget about LRM right away if you want to stay away from desktop PP. Even the (otherwise, if you can do desktop PP, not recommended)
- denoise / sharpening in Snap and
- CNR in Photo Mate R2
produce waaaay better results, because they don't downsize their output, unlike "Share" in LRM.
Also note that, as explained above, the CNR setting LRM uses will always be 25, which is definitely overkill for Note4 base-ISO shots. This is why I recommend against using the presets of LRM – you'll most probably want to decrease the CNR on the desktop, so you'll need to touch the sliders there anyway, making setting NR on the mobile unnecessary. Just manually decrease CNR to around 10 (if you shoot at base ISO) in the desktop LR; it'll produce the best possible compromise. And, again, the output will then be significantly better than with either Snap's or Photo Mate R2's built-in CNR options. (The latter remark also applies to the sharpening quality of Snap.)
All in all, you can't expect much from post processing on Android. Desktop tools will always produce significantly better results. Only use these (along with shooter apps already supporting in-app denoising / sharpening) if you really can't use a full computer for image post-processing.
1.3.3 Using a camera app with built-in CNR and sharpening
Assuming you want the fastest possible way of sharing your images with, for the average Joe, more pleasing “looks” (read: no color noise, sharp), you may want to give the built-in CNR and sharpening support of the camera app you use a try. Ideally, sharpening / CNR should be achievable right in the app you shoot with. This is the classic case of social shooting in, say, pubs, when you want to share your shots right away (as soon as possible) and, consequently, can't wait to edit your images in another Android app on the same phone after shooting, let alone transfer your images to an x86 computer for post-processing (and, consequently, later sharing).
I have bad news for you: Snap has definitely bad sharpening and not very good CNR support. (Nevertheless, even these, when used, produce better images than the stock Camera app's complete mess.) Let's start with the latter.
1.3.3.1 CNR in Snap
1.3.3.1.1 Enabling CNR
In Snap, CNR isn't enabled by default. It has to be enabled by ticking the “Photo > Denoise” checkbox, annotated with a rectangle below:
Note that I also annotated the “Sharpness” menu (with an arrow), in which you can configure post-sharpening. (Generally, as you already know, you'll want to completely zero it out, unless you really need to do the sharpening right during your shooting.)
Also note that, in order for the Denoise checkbox to be displayed, you must enable “Others > Show Advanced Settings”, also annotated below:
1.3.3.1.2 And what about the quality?
As I've already hinted, you can't expect much from Snap's CNR algorithm. The good news, however, is that it isn't worse than that of the standalone Photo Mate R2. That is, if sharpening isn't important (and, again, you absolutely must do everything on Android), you can just use Snap's built-in CNR and won't end up having to load the same image into Photo Mate R2 afterwards.
1.3.3.1.2.1 Snap, “Denoise” disabled
A pair of Snap crops of the original image shot without “Denoise” enabled:
Bench:
Black window:
1.3.3.1.2.2 Snap, “Denoise” enabled
And with “Denoise” enabled (original, full image):
Bench:
Black window:
1.3.3.1.2.3 Photo Mate R2, CNR=30, Sharpening=0
Finally, compare the above crops to those of Photo Mate R2 with CNR=30 and without(!) any kind of sharpening, in order to provide a level playing field for the two apps. Original image; the settings I used.
Again, as has been explained in Section 1.3.2.1.1, you won't want to go over CNR=30 with Photo Mate R2 because of the major smearing effects. In that section, I've shown you sharpened crops. Note that the sharpened black window crop is HERE (screenshot of the settings used)
Bench:
Black window:
1.3.3.1.2.4 Summary
As you can see, unlike Photo Mate R2 with its separate color and luminance NR sliders, Snap applies a sizable amount of luminance NR as well. Consequently, the resulting image is, as you may have already noticed, significantly softer.
After all, luminance NR amounts to blurring the image itself, and not “only” the colors in it. Also, luminance noise is far more natural and film-like and, consequently, acceptable. This is why I generally don't apply luminance NR to my low-ISO shots. Too bad Snap doesn't allow for separate noise reduction – currently, its luminance NR is just too heavy-handed and produces pretty soft results. (Nevertheless, needless to say, these results are still way superior to those of the stock Camera app!)
1.3.3.1.2.5 Color saturation decrease
Note that, as with Photo Mate R2 (and unlike with the three desktop apps when properly configured), the color saturation definitely decreases in Snap's shots. Just compare the intensity (saturation) of the brown of the branches in the following shots, starting with the non-denoised Snap original:
Snap, denoised:
Photo Mate R2, CNR=30, no sharpening:
(note that you can find the output of desktop apps, along with the absolutely awful stock Camera app, in section “1.3.1.2.2 Bush”. Technically, the non-denoised Snap original can also be found in that section; however, for easy comparison without having to scroll much, I've repeated it here.)
1.3.3.2 Sharpening in Snap
As has been mentioned several times, in order to get the best possible results via post-processing, you REALLY want to set Photo > Sharpening to zero (a screenshot of the whereabouts of the menu item is in section “1.3.3.1.1 Enabling CNR” above). In this section, I scrutinize the sharpening quality of the app. Regretfully, it's pretty bad; no wonder I recommend disabling it entirely.
Now, let's take a look at the default (3) settings:
Tree:
Compare this screenshot to those in section “1.3.1.2.1 Trees (oversharpening halos)” above. See why I don't recommend using sharpening in Snap at all?
Naturally, the maximum sharpness level, 6, results in even worse output, with even more prominent sharpening halos:
Nevertheless, should you really need on-Android sharpening and want to refrain from using Photo Mate R2, you can still use a sharpness value of “1”. It corresponds to 50% (or even more) sharpening in Photo Mate R2.
2. Tips and tricks for desktop post-processing
Above, we've seen the relative quality of the three approaches:
1. desktop (section “1.3.1 Using a camera app producing as little-processed images as possible (and possibly using desktop apps to make these images more natural-looking)”)
2. Android with an additional app (section “1.3.2 Post-processing on Android & most known problems of lower-quality CNR algorithms”)
3. not using any kind of post processing but using the built-in NR and/or sharpening of the camera app itself (section “1.3.3 Using a camera app with built-in CNR and sharpening”)
We have seen that the achievable quality gradually decreases in the above order.
In this chapter, I provide you with other tips on post processing Note4 images on the desktop; that is, the best way to achieve the best image quality.
Basically, I've found Neat Image (free for private, non-commercial use if you accept its not very restrictive limitations) and Topaz DeNoise somewhat better than Lightroom. Nevertheless, even Lightroom can produce significantly better results than anything on Android, even the expensive Photo Mate R2.
(to be continued!)
(reserved for future updates)
(reserved for future updates 2)
(reserved for future updates 3)
excellent analysis as usual. much appreciated!
Great work. You are legend Menneisyys.
Sweet
As you may have noticed, Microsoft has released version 2 of its absolutely excellent panorama stitcher app, Image Composite Editor (ICE for short), with several new features, including the ability to create panoramas out of videos. As I'm a big fan of panoramas and have always loved ICE for its speed, accuracy and being free, I've very thoroughly tested the new feature, particularly to find out whether it can significantly increase the quality of panoramas one can create with the Samsung Note 4, otherwise the best and most versatile high-end phone today.
During my tests, I shot 4K videos taking 32-33 seconds for a 360-degree turn (in portrait orientation, to maximize resolution) and then processed them with ICE. First, five stitches (three of them with inline crops): four by ICE and one created by the dedicated “Panorama” mode of the stock Camera app:
ISO Auto, stock Camera app:
Flickr
ISO 800 (max.), stock Camera app:
Flickr
ISO Auto, Snap camera HDR, 48 Mbps, 0 Sharpening:
Flickr
ISO 1600 (this has no effect on the end result), Snap camera HDR, 48 Mbps, 0 Sharpening:
Flickr
OOC image:
Flickr
Please check out my writeup HERE for more info on the intricacies of the above shots – what one wants to pay attention to, how to properly assess noise reduction etc.
Note: as with non-sharpened Snap camera HDR shots, the untrained eye may find the results of Snap camera HDR too soft. After all, as I always recommend, I shot the video with sharpening fully disabled. After applying some sharpening (maximal, in the built-in “Preview” app of OS X) to the above shot, it becomes far more eye-pleasing:
My remarks:
1. the 4K + ICE combo produces significantly more detailed panos than the OOC panos shot in the dedicated “Panorama” mode of the stock app, particularly if you're on a Snapdragon 805 CPU-based Note4 and use “Snap camera HDR” at its 48 Mbps, 0-sharpening mode for recording.
2. it extracts far (about an order of magnitude) fewer input images for stitching than the number of separate image slices used by the dedicated “Panorama” mode of the stock app. Basically, it uses some 32-34 images for a 360-degree turn (meaning roughly one image every 11 degrees).
This means that, if there are stitching errors caused by parallax, they will be far more severe than with the stock app. Some examples are annotated in the following crop of the above Snap 1600 shot (original):
With the stock app, thanks to the much higher frequency of sensor sampling, such huge errors aren't at all common.
Nevertheless, Samsung's implementation isn't as fast as, say, Apple's. Apple's panorama mode uses an even higher sampling frequency, resulting in parallax errors being almost entirely absent from the target pano, even when shooting the panorama by just turning around, without paying attention to rotating the phone around its vertical axis to minimize parallax errors.
(Note that by restricting the panorama area to some 0.3 seconds of the input video (see THIS screenshot), ICE only used two input frames. As these frames were different from the ones extracted from the video for the 360-degree panorama, the resulting stitched image has different parallax-induced stitching errors – in this case, none. See THIS for the resulting (of course, not very wide) pano.)
3. as with still images, videos created by the stock Note4 Camera app are heavily oversharpened after applying some very serious and, in good light, absolutely unnecessary noise reduction. This means that, if you have a Snapdragon 805 CPU-based Note4, you'll, as with still shots, want to use Snap camera HDR for shooting 4K video instead. (Note that you must use the configuration settings HERE to make it shoot usable 4K footage.)
All in all,
if you cannot use an iPhone (or, if the smaller sensor and consequently worse noise performance and lower dynamic range isn't a problem, an iPad) for shooting sweep panos, you'll want to prefer shooting 4K video with Snap camera HDR and processing the results with ICE. It may deliver significantly better-quality results than the Panorama mode of the stock Camera app. Nevertheless, as it doesn't sample the sensor very often, you'll really want to minimize parallax error while shooting. The above panos were shot without trying to do so – I just turned around my own axis so that, by introducing a lot of parallax errors, I could find out how ICE handles them.
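(If you prefer to control the frame extraction yourself instead of feeding the video straight to ICE, something like the following works too; this is just a sketch of an alternative workflow, not what I did above, and it assumes ffmpeg is installed, with an illustrative file name:)

```python
import subprocess

# One frame per second from a ~33 s, 360-degree sweep yields roughly the
# same 32-34 stills ICE extracts on its own; the PNGs can then be fed to
# ICE as a regular image-set panorama.
subprocess.run(
    ["ffmpeg", "-i", "pano_4k.mp4", "-vf", "fps=1", "frame_%03d.png"],
    check=True,
)
```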
One of the best threads on the Note4 board I've ever seen. :good:
There is no Denoise option even though I have 6.3.0. The advanced parameters don't change anything in photo mode. I'm running 4.4.4.
Help appreciated
akshaypatil869 said:
There is no Denoise option even though I have 6.3.0. The advanced parameters don't change anything in photo mode. I'm running 4.4.4.
Help appreciated
1. Denoise is definitely here on my KitKat Snap805 as of 6.3.0. Is yours also a Snap805 phone, or an Exynos one?
2. Nevertheless, as I've explained above, I don't really recommend it as its denoising algorithm isn't very good. So, you don't lose much.
Menneisyys said:
1. Denoise is definitely here on my KitKat Snap805 as of 6.3.0. Is yours also a Snap805 phone, or an Exynos one?
2. Nevertheless, as I've explained above, I don't really recommend it as its denoising algorithm isn't very good. So, you don't lose much.
Damn, I thought it was a Snapdragon. I was under the impression that 910c pointed to Snapdragon. I am disappointed.
akshaypatil869 said:
Damn, I thought it was a Snapdragon. I was under the impression that 910c pointed to Snapdragon. I am disappointed.
I have bad news - Snap camera HDR will not deliver better IQ for you, then.
Yes, I read about it. My parents got the phone on my advice, from this link.
http://www.xcite.com/phones/mobile-...mp-4g-lte-wi-fi-smartphone-5-7-inch-gold.html
See for yourself how misleading the specs are.
Should I sue them? ;D
akshaypatil869 said:
Yes, I read about it. My parents got the phone on my advice, from this link.
http://www.xcite.com/phones/mobile-...mp-4g-lte-wi-fi-smartphone-5-7-inch-gold.html
See for yourself how misleading the specs are.
Yup, they've completely messed it up - "Snapdragon 805" as the CPU and "Octa Core " as "No of Cores".
Should I sue them? ;D
Well, if image quality is of enormous importance to you and you purchased the phone from them believing it's Snap805-based, because you wanted to make use of the additional image quality offered by near-RAW saving, you could ask them to exchange it for a real 805-based model.
Don't forget to mention that Samsung's Lollipop update doesn't support RAW export, so the only way to get natural photos on the handset is the Snap HDR route on Snap805-based devices.
Menneisyys said:
Yup, they've completely messed it up - "Snapdragon 805" as the CPU and "Octa Core " as "No of Cores".
Well, if image quality is of enormous importance to you and you purchased the phone from them believing it's Snap805-based, because you wanted to make use of the additional image quality offered by near-RAW saving, you could ask them to exchange it for a real 805-based model.
Don't forget to mention that Samsung's Lollipop update doesn't support RAW export, so the only way to get natural photos on the handset is the Snap HDR route on Snap805-based devices.
There is no need to do such things on the Exynos variants because they offer much better image/video quality, but if you like to flash kernel stuff then the Qualcomm-based one is preferred.
Personally, I will pick the Exynos one: Android is mature now, I can live with the stock kernel, and I really won't have time to process every image and video. The only real reason left to get the Qualcomm is that the Exynos still doesn't offer dual band.
You can watch it starting from 7:32 ... N9100 (S805) vs N910U (Exynos)
TheEndHK said:
There is no need to do such things on the Exynos variants because they offer much better image/video quality
I'm afraid you're wrong. While I haven't had the chance to directly compare the two models' image quality under exactly the same circumstances in well-controlled comparative tests, all the Exynos photos I've seen exhibited exactly the same problems as Snapdragon 805-based ones and were just as bad.
The only difference between the two models is the ability of Snap camera HDR to access the image before it undergoes noise reduction and sharpening (but after WB). This is why it's capable of exporting almost-RAW images with the right settings (basically, Sharpness at zero).
@Menneisyys again an amazing post, thx for your time, 2 thumbs-up
Menneisyys said:
I'm afraid you're wrong. While I haven't had the chance to directly compare the two models' image quality under exactly the same circumstances in well-controlled comparative tests, all the Exynos photos I've seen exhibited exactly the same problems as Snapdragon 805-based ones and were just as bad.
The only difference between the two models is the ability of Snap camera HDR to access the image before it undergoes noise reduction and sharpening (but after WB). This is why it's capable of exporting almost-RAW images with the right settings (basically, Sharpness at zero).
I don't think I'm wrong, because there are a couple of sites comparing image quality between Exynos and Qualcomm (not necessarily the Note4, but also the Note3 or S5). All the results are the same, just like in the video linked above: the Qualcomm applies huge noise reduction and hence loses a lot of detail, and your solution is to avoid the stock camera, because Snap Camera HDR can stop the noise reduction and sharpening. Furthermore, I have to point out that the Exynos usually has better focus speed and accuracy because its ISP is better than the Qualcomm one, so it is not only about image quality.
I forgot all the links so I can't share them here, but I'll try to find them later. There is also a thread about Note4 image quality in a HK forum; we can buy the Exynos and the Qualcomm easily in HK (Samsung launches all of them here), and a couple of people who tried both models reported the Exynos has better quality and focus.
I have to admit your method is even better (nice try :good:) in terms of quality, because it not only bypasses the denoising but also the sharpening, but it's going to take some time per image for the post-processing. Personally, I will pick the Exynos for convenience, because I always record videos and it's impossible for me to process all of them (especially 4K); I would need an i7 PC running overnight to do that.
I'm already planning to get the S6 in April; let's see how capable the new camera is.

Extracting Both Images from P9 Dual Camera

Hi All,
I am trying to test some image analysis applications with the Huawei P9. Is it possible to extract two images (one from each camera) from a single shot? I know one of the cameras has a monochrome lens, and I know how to obtain just the monochrome image, but it would be extremely valuable if I could obtain both images from just one shot.
Looking forward to your assistance,
Josh
I do not want to dampen your enthusiasm, but from my tests, there are no two images from one shot.
I didn't approach my tests from an engineering standpoint; I only did some empirical tests, and from these I gather that:
- when you select Monochrome mode, the P9 activates the left camera (on the left when facing the phone's back)
- with all the other modes, the P9 activates the right camera (the one between the flash and the left camera)
The P9 doesn't create 2 images and then combine them; it always shoots just 1. How did I come to this conclusion? You can try it at home too:
I chose a few static subjects and took my photos with the phone on a tripod; then I did many shots in the normal way and also while alternately covering the 2 cameras with black tape.
Even with the naked eye, and even using image comparison software (I used Beyond Compare from Scooter Software), I found no difference at all: no more brightness, no more contrast, no better image definition.
I tested in a bright environment and in a dark one, I enabled and disabled PRO mode, and I tried to make the testing as complete as I could (honestly, I omitted testing in RAW mode; I tested only JPEGs), but my conclusion is that the 2 cameras do different jobs and are definitely NOT working together.
Thanks for testing, but did you also try this outside on a landscape view? Maybe then we'd see different results?
Otherwise this is yet ANOTHER thing Huawei lied about.
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see.
ScareIT said:
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see.
That would be nice!
Hey guys. I did a quick test shooting in bokeh mode, or aperture effect (I guess you know what I mean). If you cover the black and white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it is supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, in order to extract two images from one shot, the best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Great oTToToTenTanz!
I confirm that! Both cameras are essential to enable the wide aperture effect: when you try to shoot in bokeh mode with the lens covered, an alert appears asking you to check that the lens is clear, the blur effect disappears, and it's impossible to edit the depth in post-production.
I have 2 hypotheses:
- the phone really combines the 2 pictures in order to recreate the depth (a strategy used in all 3D cameras), so in some way it should be possible to get both pictures
- the phone uses the laser emitter to project IR around the subject; the monochrome camera then picks up the infrared information (and considering that its sensor has no RGB filter, it will be very efficient at that) and stores it in order to obtain an accurate depth map (I mean something like this: https://www.youtube.com/watch?v=dgrMVp7fMIE)
Nice things to try!
Additional Info on Depth
oTToToTenTanz said:
Hey guys. I did a quick test shooting in bokeh mode, or aperture effect (I guess you know what I mean). If you cover the black and white lens, it lets you shoot the picture BUT NOT edit the depth of field once you've taken it.
If you uncover the lens, it works as it is supposed to and also stores the depth information (two lenses are crucial for getting depth information).
Thus, in order to extract two images from one shot, the best bet is to try it in bokeh mode. But even then I don't know if it's possible. However, the phone definitely uses both lenses in that case.
Hey oTToToTenTanz,
Really appreciate your (and everyone else's) help on this! Can you give me some more info on how you actually extract the depth info in a usable form, e.g. a matrix? Does the image just produce an RGB-D image once saved?
Thanks so much,
Josh
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones: they'll look the same...
As far as I understand it, there are two cases in which both cameras are used.
One is for the wide-aperture ("bokeh") mode, in which a depth map is created from both pictures that have a slightly different perspective. I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
The other case is landscape shots in low light. Several people reported that covering the second camera in this scenario results in much darker images. This seems like a silly limitation, but I believe I understand why it's there. The two images that the cameras take differ in perspective (obviously, due to the fact that the cameras are mounted next to each other), which is quite difficult to adjust for when trying to combine both sensors' data. However, when focusing at infinity, for example when taking landscape shots, the difference in perspective is negligible, so that in this case the two sensors' data can be easily combined to improve low-light performance.
Maybe it would be possible to combine both sensors' output at closer distances in a satisfactory way, but it seems that Huawei chose not to implement that. If I find a way to extract the second sensor's data from a wide-aperture image, I'll poke around a bit to see if it would be possible to combine them.
I did some poking around on my lunch break. I threw a wide-aperture image into JPEGsnoop and it came up with two images in the file (four if you count the thumbnails, as well), the first one being the processed, "bokeh" image, while the second is the original color image without any processing. I assume that this is the image that is used to re-process the wide-aperture image when editing the focus point or aperture through the gallery app.
JPEGsnoop also told me that there's more data after the image segments. Since it couldn't work out what that data is for (it's past the end of the actual JFIF file), I checked it out using a hex editor. I found a marker "edof" (extended depth-of-field?) followed by what looks like some header data, followed by lots of repeating bytes. This block is about 1/16 the size of the image in pixels (so 1 byte for each 4x4 pixel block). I'm not sure whether that's a small greyscale version of the image itself or a depth map, but I suspect the latter.
So, I'm afraid that it will be impossible to extract the monochrome image sensor data from a wide-aperture image, as it's not there anymore.
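(For anyone who wants to repeat this inspection without JPEGsnoop, a few lines of Python will find the same markers; the file name is a placeholder:)

```python
def find_all(blob, marker):
    """Return every offset at which marker occurs in blob."""
    hits, i = [], blob.find(marker)
    while i != -1:
        hits.append(i)
        i = blob.find(marker, i + 1)
    return hits

data = open("IMG_wide_aperture.jpg", "rb").read()

# FF D8 FF opens a JPEG stream (EXIF thumbnails match too, which is why
# more than two hits can show up); FF D9 closes one.
print("SOI markers at:", find_all(data, b"\xff\xd8\xff"))
print("EOI markers at:", find_all(data, b"\xff\xd9"))
print("'edof' tag at:", data.find(b"edof"))
```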
PerpulaX said:
I've read somewhere that the resulting image is a normal JPG file that is way too large, so it seems that there is additional data after the end of the actual JPG image. This would also explain why the capability to adjust depth of field is lost once the file is opened and saved by any application. I'll have a look at such a file when I have some spare time; maybe I'll find out more.
Click to expand...
Click to collapse
I can confirm that. I did a few shots of a single subject (always using a tripod):
- the pictures in normal mode, and in wide aperture mode with the B&W camera covered, weigh about 2.5 MB (max resolution); the photo's Title/Subject/Description is marked as "edh"
- the same subject in wide aperture mode (with the B&W camera fully working) weighs about 5.5 MB (more than double); the photo's Title/Subject/Description is marked as "edf"; if this photo is opened with image editing software, no alpha layers or other visual information appear anywhere; if the photo is saved back, the size becomes comparable to the same photo without the wide aperture effect
As the depth information does not appear in any editing software, I suppose it is hidden inside the JPEG file with some kind of steganography technique. I tried to examine the file with some ready-to-use tools (like stegdetect, which should be able to detect whether a JPEG file is standard or has something hidden in it), but I only got mismatched-header errors, nothing that let me understand where and how the depth information is stored and, above all, whether the black-and-white picture is also stored inside.
The camera seems to take two images for every shot. You can, for instance, take a picture and then edit it with the onboard effects. If I make the picture partially B&W, I can see that it uses a real B&W picture taken with the original shot; it is not an artificial B&W conversion.
The question is where that image is stored, or whether the necessary information is only "combined".
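A naive way to check whether a second full JPEG really is embedded in the file is to scan for additional start-of-image markers. A small Python sketch (the file name is a placeholder, and FF D8 FF can also appear by coincidence inside compressed data, so the hits are only candidates):
Code:
# Count candidate JPEG start-of-image (FF D8 FF) markers in the file.
data = open("wide_aperture_shot.jpg", "rb").read()  # placeholder name

starts, offset = [], 0
while True:
    i = data.find(b"\xff\xd8\xff", offset)
    if i == -1:
        break
    starts.append(i)
    offset = i + 3

print("candidate SOI markers at byte offsets:", starts)
# More than one hit (beyond the thumbnails) suggests an embedded second image.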
PerpulaX, ScareIT, you guys are right:
- the 992x744 depth map is stored as 8 bits per pixel at the end of the file; use a hex editor like HxD to extract the image (look for the ASCII tags "edof" and "DepthEn")
- the displayed JPG is the one saved to your SD card after the blur processing
- the hidden JPEG in the EXIF data is the original shot, without blur processing
So that explains why you can re-edit your picture anytime on your P9, even after renaming it... or simply have fun with the depth map, for instance to cut out subjects in Photoshop
Made a Python script to automate the EDOF and image extraction. It's simple, but it works.
https://github.com/jpbarraca/dual-camera-edof
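For anyone who just wants the gist, here is a minimal sketch of such an extraction, assuming the 992x744, 8-bit layout described above and that the raw map sits at the very end of the file (that matches the posts above, but it is not guaranteed for every firmware):
Code:
# Pull the trailing 992x744 8-bit depth map out of a wide-aperture JPG.
# If the output looks wrong, check the offsets with a hex editor
# ("edof"/"DepthEn" tags); the layout here is an assumption.
from PIL import Image

W, H = 992, 744
data = open("wide_aperture_shot.jpg", "rb").read()  # placeholder name

depth_bytes = data[-W * H:]  # assumption: raw map at the very end
Image.frombytes("L", (W, H), depth_bytes).save("depth_map.png")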
zoubla88 said:
PerpulaX, ScareIT you guys are right,
- the 992x744 depth map is coded on 8 bits at the end of the file, use HxD editor to extract the image (check the tags in ascii code "edof" & "DepthEn" ).
- displayed jpg is the saved one after blur processing on your sd card
- hidden jpeg in exif is the original image shot , without blur processing.
So it explains why you can re-edit your picture anytime on your P9 even after renaming... or simply have fun with the depth map for detouring in photoshop for instance
Click to expand...
Click to collapse
Can you explain what is possible to do in post-processing? What can I do with the photo?
You can do exactly the same things as the Huawei gallery app (at least).
For Photoshop, there are plenty of tutorials on using depth maps with the Lens Blur filter.
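If you'd rather stay out of Photoshop, the same idea can be approximated in a few lines of Python with Pillow: blur a copy of the original and composite it back through the depth map as a mask. File names are placeholders, and whether bright values mean "far" on these maps is an assumption; invert the map if the result looks backwards:
Code:
from PIL import Image, ImageFilter

sharp = Image.open("original.jpg").convert("RGB")  # placeholder names
depth = Image.open("depth_map.png").convert("L").resize(sharp.size)

blurred = sharp.filter(ImageFilter.GaussianBlur(radius=8))
# Where the mask is bright the blurred copy shows through; where it is
# dark the sharp original is kept.
Image.composite(blurred, sharp, depth).save("refocused.jpg")
This is only a crude approximation: Photoshop's Lens Blur varies the blur radius per pixel, while this just cross-fades between one blurred copy and the original.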
ScareIT said:
Yes, I did.
I'm thinking about making a full post with a photo comparison. Let's see
Click to expand...
Click to collapse
Waiting for more details and shared experiences from you
Tijauna said:
Yes, unfortunately I think this is simply a feature that Huawei lied about. The phone doesn't actually use both lenses at the same time to produce better-quality normal photos; the monochrome lens is only used for B&W mode or to obtain depth information for the wide aperture mode. The two lenses are not used in conjunction to provide better low-light performance. You can try it yourself, as stated earlier in the thread: cover the B&W lens with your finger and compare the photos with normal ones; they'll look the same...
Click to expand...
Click to collapse
Hi!
I think the P9 does take two pictures and combine them in low-light conditions. Here are two examples where something went wrong with combining the images and both pictures become visible: https://goo.gl/photos/cK5q2TEisEU7rmpz9
What do you think?
Abel
So the file size is increased when the B&W camera is uncovered, but it gives no actual benefit to the picture? Damn it, as useless as interpolation!

Why is HDR a separate camera mode?

I don't get why HDR is a separate mode and not just on by default for taking regular pictures. Wouldn't you want HDR on most of the time?
worldsoutro said:
I don't get why HDR is a separate mode and not just on by default for taking regular pictures. Wouldn't you want HDR on most of the time?
Click to expand...
Click to collapse
Because they wanted to appeal to photographers and HDR is a dirty word.
Sent from my CLT-L29 using Tapatalk
Hi worldsoutro,
Photography is all about capturing light, and HDR is just another way of doing it. But it's not the main way of taking photos, so it totally makes sense to have HDR as an option. HDR stands for High Dynamic Range, which allows you to combine whites (bright spots) and blacks (shadows) in one image. To create such an image, the camera has to capture at least three images.
1st - under-exposed (this image will give you very nice and dark shadows).
2nd - correct exposure (normal photo).
3rd - over-exposed (capturing those whites, sunlight, anything bright).
Then the software takes all three shots and composes one image. Three images is the bare minimum; there are methods that use seven or more images combined into one.
The biggest downside of HDR is color representation: all the colors end up pushed towards extremes. Taking HDR photos is probably also heavy on the battery, since you are capturing several images in quick succession and processing them (Huawei's HDR might also be simulated entirely in software, i.e., taking one image and processing it to look like HDR).
If someone has info about how Huawei has implemented HDR photography, please post! I'm actually curious now.
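I don't know how Huawei implements it either, but the merging step itself is easy to play with using OpenCV's Mertens exposure fusion. A minimal sketch assuming three bracketed shots (the file names are placeholders; this is generic exposure merging, not Huawei's actual pipeline):
Code:
import cv2
import numpy as np

# Three bracketed shots: under-, correctly and over-exposed (placeholders).
imgs = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

cv2.createAlignMTB().process(imgs, imgs)        # align handheld shots
fused = cv2.createMergeMertens().process(imgs)  # float result in [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))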
zed'sded_bb said:
Hi worldsoutro,
...
1st - under-exposed (this image will give you very nice and dark shadows).
2nd - correct exposure (normal photo).
3rd - over-exposed (capturing those whites, sunlight, anything bright).
...
Click to expand...
Click to collapse
Your description of combining exposures is correct, but you got the reasons for the different exposures the wrong way round: underexposure is to retain detail in the highlights, and overexposure is to retain detail in the shadows.
Sent from my CLT-L29 using Tapatalk
zed'sded_bb said:
Hi worldsoutro,
...
1st - under-exposed (this image will give you very nice and dark shadows).
2nd - correct exposure (normal photo).
3rd - over-exposed (capturing those whites, sunlight, anything bright).
...
Click to expand...
Click to collapse
Over-exposure gives usable shadows and under-exposure usable highlights.
Sent from my CLT-L29 using Tapatalk
So on a bright sunny day, should I always shoot with HDR?
Good catch, guys. Yeah, overexposure lets you capture and preserve all the detail in the shaded areas, while underexposure keeps light sources from blowing out.
I suppose we are turning this into an HDR topic altogether.
worldsoutro - I think you can use HDR whenever you think you will like the result. Photography is art, in the end. I would say that during midday hours (when the sunlight is harshest) and at night (with appropriate light) HDR can give you some cool results.
Play around with the different modes. Check out Pro mode too; you basically have full control over the scene. It's pretty cool.
Hope it was all helpful. Cheers!
Auto (photo) mode uses HDR whenever it deems it appropriate: those are the situations where it says "sharpening - hold the device still" (also the same situations where most of the criticisms of excessive sharpening apply).
It's a less elegant implementation of the auto HDR you see on some other phones, and one you can't turn off without switching to Pro mode (but Pro mode is very good on the P20 Pro and allows all its settings to remain on auto, so making that switch when you need it is usually not a big problem).
worldsoutro said:
So on a bright sunny day, should I always shoot with HDR?
Click to expand...
Click to collapse
It depends on what outcome you have in mind at the time you take the photo. I like playing with light, and although I like the wide-dynamic-range look, I also like to take photos with high contrast, so sometimes I override auto mode and lock the exposure the way I want.
I have been using DSLRs for many, many years (always travelling with a backpack full of lenses), but I think this phone's camera is really amazing. In really low-light situations you can take much sharper photos handheld than you would with a DSLR, and that's something.
Sent from my CLT-L29 using Tapatalk

Camera tips and tricks

It seems some camera options are not very well documented, so I thought I would start a thread to share tricks that help improve photos. There is another thread for tips and tricks, but that one focuses on other things. Since the camera is one of the highlights of this phone, I figured a dedicated thread was worth it.
Here are a few I found. Feel free to share yours!
1. When tapping to focus on a point, if you do a long press instead, it will set a focus point but add a second, movable frame for exposure, so you can have an exposure point that is not your focus point (i.e. focus on someone but expose for the highlights).
2. If you short-tap to focus, you can then drag the focus point up or down to adjust the exposure level (exposure compensation).
3. From my early tests, it looks like the camera's HDR is better at recovering shadows than highlights. In a high-contrast scene, change the exposure so the highlights are well exposed in the preview frame; the shadow detail will come out better (don't exaggerate this or the shadows will remain too dark). Adjusting the exposure for the shadows never seems to recover the highlights properly.
4. I've seen some reviews where the Pixel 3 gets a better exposure using its night mode. If the Mate's night mode yields results that are too dark, you can force the shutter time and ISO to use (tap the icons at the bottom left and right). So far I've found that if the picture info shows auto mode exposed for, say, 4 s at ISO 400, keeping 4 s but doubling the ISO (800 in that case) will usually produce a better exposure, similar to the Pixel's; see the quick stop calculation after this post. I don't want to get into a color/detail comparison between the two devices; this is just about getting a better exposure. I'm guessing they'll sort this out in a future update.
For now, that's what I've found that didn't feel intuitive.
Please share your findings!
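The quick stop calculation mentioned in tip 4: total exposure scales with shutter time times ISO, so doubling either one adds exactly one stop. A tiny Python helper using the example values from that tip:
Code:
import math

def stops_gained(t1, iso1, t2, iso2):
    """Exposure difference in stops between two (time, ISO) settings."""
    return math.log2((t2 * iso2) / (t1 * iso1))

print(stops_gained(4, 400, 4, 800))  # 4 s ISO 400 -> 4 s ISO 800: +1.0 stop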
Let me share my suggestions:
1. In Pro mode, shutter speed is restricted to 30 s of exposure, whereas night mode can give up to 52 s (the max I have seen).
2. You can also try the different light painting modes to achieve low-light shots. I tried star trails and got good results (but exposure gets throttled and/or locked at some point).
3. In Pro mode, if you are taking low-light snaps in an enclosed area that your flash can reach, you will get very good photos with reasonably short exposure times.
4. Use a tripod for all night shots (a Bluetooth trigger will make it even better); don't rely on stabilization unless there is ample light and the exposure time is around 1/125 s, because even night mode can be affected, despite the claim that OIS will be sufficient.
5. Lowering the exposure when taking close-up flash photos helps partially retain detail that would otherwise be lost to flash overexposure.
Thanks,
Rakesh
