This exercise is about creating a composite image from two shots.
Part one: Merging exposures
For the first part I shot two landscape images – one exposed correctly for the sky and one exposed correctly for the ground:
I copied the ground image into a new layer over the sky image in Photoshop, then used the eraser tool to remove as much of the over-exposed sky as possible. I encountered some difficulty here, as the middle-distance and far-distance ground really didn’t blend very well. On reflection, a scene with harsher lines between ground and sky would have been significantly easier. The result is far from convincing:
For the next alteration I used the selection tool, with the “refine edges” options to feather and generally improve the selection. It’s better, but not by much:
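Outside Photoshop, the same idea – a feathered selection blending two exposures – can be sketched in a few lines of NumPy. This is a simplified illustration, not what Photoshop actually does: the feather here is a crude repeated box blur of a hard 0/1 mask, and the image arrays and radius are placeholders.

```python
import numpy as np

def feather(mask: np.ndarray, radius: int) -> np.ndarray:
    """Soften a hard 0/1 mask with a separable box blur (a crude feather).
    mode="same" zero-pads, so values near the borders are pulled down."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        mask = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, mask)
    return mask

def blend(ground: np.ndarray, sky: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-pixel linear blend: mask=1 takes the ground exposure, 0 the sky."""
    m = mask[..., None]  # broadcast the 2-D mask over the colour channels
    return m * ground + (1 - m) * sky
```

The soft edge is exactly why my result still struggled: a feather only hides the seam when the tones either side of it are already similar.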
It’s difficult to get right when there are so many variations in shade before reaching the sky. This definitely doesn’t paint me as a Photoshop master. “You can tell from the pixels”, especially if you’ve “seen some shoops” in your time.
For the final merge I used Photomatix’s exposure fusion, which produced far more convincing results, as it’s a lot more sympathetic to the variation in tones in the middle and far distance:
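Photomatix’s algorithm is proprietary, but the general idea behind exposure fusion (as in Mertens et al.) is to weight each pixel by how well exposed it is and take a weighted average across the bracketed shots. The sketch below is a heavily simplified per-pixel version – a real implementation also weights by contrast and saturation and blends with a multi-resolution pyramid – and the sigma value is an assumption.

```python
import numpy as np

def fuse(exposures, sigma=0.2):
    """Naive exposure fusion: weight every source pixel by its
    'well-exposedness' (a Gaussian around mid-grey, 0.5) and average.
    Inputs are float arrays in [0, 1]."""
    stack = np.stack(exposures).astype(float)        # (n, h, w, 3)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights = weights.sum(axis=-1, keepdims=True)    # combine colour channels
    weights /= weights.sum(axis=0, keepdims=True)    # normalise across shots
    return (weights * stack).sum(axis=0)
```

Because the weighting is smooth and per-pixel, every tonal gradation in the middle distance gets a gradual mix of the two exposures – which is exactly why it copes where a hand-drawn selection edge doesn’t.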
Part two: Replacing the sky
For this one I took this sky, shot at Uluru, Australia, and merged it with the ground-exposed image:
I took the selection from the second image I had edited, inverted it, and used it to cut out the bottom of the Uluru image. I cut a bit of ground out too. I then darkened the ground so it looked more like it might be a night shot, and reduced the saturation of the Uluru sky. Here’s the result:
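The three adjustments – invert the mask, darken the ground, desaturate the replacement sky – are simple per-pixel operations. A minimal sketch, assuming float images in [0, 1] and placeholder strengths for the darkening and desaturation:

```python
import numpy as np

def desaturate(img, amount):
    """Pull colours toward their per-pixel grey average; amount=1 is fully grey."""
    grey = img.mean(axis=-1, keepdims=True)
    return img + amount * (grey - img)

def replace_sky(ground_img, sky_img, ground_mask, darken=0.6, desat=0.3):
    """Keep the masked ground (pulled down toward a night exposure) and fill
    the rest – the inverted selection – with a desaturated replacement sky."""
    m = ground_mask[..., None]            # broadcast over colour channels
    ground = ground_img * darken          # crude exposure pull-down
    sky = desaturate(sky_img, desat)
    return m * ground + (1 - m) * sky
```

Darkening the ground does the same job as the harsher cut line: it crushes the tonal variation near the seam so the mask edge has much less to give away.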
It’s a lot more convincing than my initial two attempts, in all honesty. The darkness was a lot more forgiving of the variation in tones, and cutting additional ground to make a harsher line helped considerably; that simply wasn’t possible when taking two images where the only difference was exposure. It is, of course, far less representative of “reality” than the other three versions.
Right tool for the right job, basically. I can see selection being useful where there are harsher lines between horizon/subject and sky, or fewer variations in tone toward the horizon. In other situations Photomatix’s exposure fusion creates far “truer” results.
I’ve got no fundamental problem with fusing exposures in the manner of part one. I really don’t think it deviates from “truth” – it is, as the exercise suggests, just a work-around for the technical limitations of the camera’s sensor.
The changes in part two are obviously not “truth” – their place is in artistic interpretation. So long as there’s no claim that it’s reality (or, I suppose, unless representing it as reality is an attempt to challenge perceptions) I don’t see it as a moral problem.