I haven’t seen it get mentioned on Tumblr yet, but Samsung has just gotten caught in a hilarious scam: they apparently advertised how their latest phones can take great pictures of the moon compared to other phones!
But it turns out they aren’t using machine learning to clear up blurry pictures; they’re using machine learning to detect when you’re taking a picture of the moon, and then swapping in a saved PNG file of the moon.
A reddit user figured this out by downloading a photo of the moon from Wikipedia, blurring it in Photoshop, then trying to photograph it on their monitor from across the room with all the lights turned off.
Their Samsung phone somehow managed to “clear up” the blurriness and recover details that weren’t there in the first place. Because it’s just cheating.
Here’s the first reddit post in the saga.
https://www.reddit.com/r/Android/comments/11nzrb0/samsung_space_zoom_moon_shots_are_fake_and_here/
The obvious next step is for someone to disassemble the camera app, find the PNG, and replace it with something else.
The problem with art is that the more “advanced” you get, technologically speaking, the less impressive everyday people find it. Most people have tried to draw a face on a piece of paper with a pencil and sucked at it, so when they see a face drawn on paper with pencil they’re like “WOAH! That’s so impressive!” But most people have never had to use ZBrush or Substance Designer or Maya, so when they see something frankly incredible made in those programs, they have no clue just how great it is and have a middling, generally uninterested reaction. I like to call this the Photorealistic Pencil Drawing of Morgan Freeman effect.
To demonstrate what genuinely difficult, professional-level art looks like, here’s a material that my program director showed us during our orientation. It’s a pile of dead fish made by Blizzard environment artist Eric Wiley (source).
On first glance, knowing nothing about 3D surfacing, you might not know what’s so impressive about this. It’s just a bunch of models of fish, right?
Wrong.
That first image represents a single, flat polygonal surface with a material applied. Those fish are not modeled. They are not being processed by the engine as polygons. It is not visual trickery, where if you looked at it from a different angle the illusion would fade. It is a texture that can be applied to any model, and that model would then look like it was covered in 3D fish.
How does that work?
So a single material is made up of a whole bunch of different maps: base colour/albedo/diffuse, roughness, metallic, specular, ambient occlusion, etc etc etc. But the important parts for this incredible 3D effect are normal maps and height maps.
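(If it helps to make that concrete, here’s a toy sketch in Python of what “a material is a bundle of maps” means. All the field names here are made up for illustration; every engine and texturing tool has its own conventions.)

```python
from dataclasses import dataclass
from typing import List

# For simplicity, treat every map as a 2D grid of grayscale values.
Image = List[List[float]]

# A material is just a named bundle of maps, one per channel.
# (Hypothetical field names, for illustration only.)
@dataclass
class Material:
    albedo: Image             # base colour
    roughness: Image          # matte vs. glossy, per texel
    metallic: Image           # metal vs. non-metal response
    ambient_occlusion: Image  # baked-in contact shadowing
    normal: Image             # which way each texel "faces" (lighting)
    height: Image             # how far in/out each texel sits (parallax)
```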
Both of these are types of bump maps, which means they modify how the surface texture appears to the camera in order to create details on the surface. Here’s a height map and a normal map for the same material:
Height maps tell the renderer how far in or out each part of the texture should appear, and the computer then calculates, based on this information and the camera’s position, how squished or stretched that part of the texture should look. A height map is what makes the fish appear to occlude the fish sitting behind them, because the renderer knows that a light (tall) pixel close to the camera should be stretched out more than a dark (deep) pixel farther from it.
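Here’s a rough sketch of the simplest version of that idea, basic parallax mapping, in Python. (The function names and the single-step offset are my own simplification; real engines use fancier variants like parallax occlusion mapping, which is what actually lets one fish properly hide another.)

```python
def sample(height_map, u, v):
    """Nearest-neighbour lookup into a 2D grayscale grid (values 0..1)."""
    h, w = len(height_map), len(height_map[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return height_map[y][x]

def parallax_uv(height_map, u, v, view, scale=0.05):
    """Shift where we read the texture, based on height and view angle.

    `view` is the tangent-space direction from the surface point to the
    camera. Bright (tall) texels get pushed further along the view
    direction than dark (deep) ones, so the texture appears to stretch
    and squish as the camera moves, even though the surface is flat.
    """
    vx, vy, vz = view
    height = sample(height_map, u, v)
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)

# Looking straight down (view = (0, 0, 1)) nothing shifts sideways;
# a glancing view shifts tall texels a lot.
bumps = [[0.0, 1.0], [1.0, 0.0]]
print(parallax_uv(bumps, 0.75, 0.25, view=(0.7, 0.0, 0.7)))
```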
On the other hand, normal maps tell the renderer what angle each part of the texture is facing. The lighting is then calculated with this information in mind. For a flat surface, that normal map would just be a flat colour, but you could make the same flat surface appear curved by telling the renderer that the right side is facing more towards the right and the left side is facing more towards the left. You can also use this to create indents that appear truly indented, or seemingly 3D details like rivets or panels, without having to actually model these things and increase your poly count.
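As a toy sketch of that lighting trick (real shaders do this per pixel on the GPU, but the math is the same): decode each texel of the normal map into a direction, then light the surface as if it were facing that way.

```python
import math

def decode_normal(r, g, b):
    """Turn an RGB normal-map texel (0..1 per channel) into a unit vector.

    The familiar purple-blue of a flat normal map is (0.5, 0.5, 1.0),
    which decodes to (0, 0, 1): a texel facing straight out.
    """
    n = (r * 2 - 1, g * 2 - 1, b * 2 - 1)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def lambert(normal, light_dir):
    """Diffuse lighting: how directly the (faked) surface faces the light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A flat texel vs. one "tilted" toward +x, both lit head-on:
flat = decode_normal(0.5, 0.5, 1.0)
tilted = decode_normal(0.9, 0.5, 0.6)
light = (0.0, 0.0, 1.0)
print(lambert(flat, light), lambert(tilted, light))  # ~1.0 vs ~0.24
```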
What’s truly impressive is that all of this was created in Substance Designer, a famously impenetrable node-based program that even a lot of professionals in the field aren’t very comfortable with, or don’t use at all in their workflow. Node-based means that none of these maps were painted; they were constructed out of basic shapes and parameters, all individual nodes connected to create a final product.
This is what that texture actually looks like:
That is hundreds of nodes doing things like “blur” and “blend” connected together to create the actual body of the fish. And that’s only the body. It continues off to the right, where there are even more nodes to convert these shapes into albedo, normals, height, occlusion, etc etc etc.
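To give a feel for what that means, here’s a toy version of the idea in Python: each “node” is just a function from images to images, and the graph is how you wire the calls together. (A loose analogy only; this is nothing like Substance Designer’s actual node set.)

```python
# Toy "nodes" over tiny grayscale grids (lists of lists of floats).

def blur(img):
    """A 3x3 box-blur node."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def blend(a, b, opacity=0.5):
    """A blend node: linearly interpolate two images."""
    return [[av * (1 - opacity) + bv * opacity
             for av, bv in zip(ar, br)]
            for ar, br in zip(a, b)]

# A two-node "graph": blur one shape, then blend it over another.
circle = [[1.0 if (x - 2) ** 2 + (y - 2) ** 2 <= 2 else 0.0
           for x in range(5)] for y in range(5)]
stripes = [[float(x % 2) for x in range(5)] for y in range(5)]
result = blend(blur(circle), stripes, opacity=0.3)
```

Now imagine hundreds of these, chained and branching, just to build the body of one fish.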
I wouldn’t even know how to begin creating this. The level of skill this requires is off the fucking charts. And yet the average player, seeing this in a video game, would not think twice about it. Depending on the type of game you play, you’ve probably personally seen dozens if not hundreds of materials as complex as this and never given them a second thought. This is what I mean.