
MGDA Shorts #1: Rossa’s Flashback

The scene starts with a young Rossa sitting on her bed. This is a memory of her past, and she has forgotten exactly how old she was. Today she beat up a boy in her class who was bullying a kid younger than them. She was too upset to eat dinner and went straight to her room.

Her dad comes in after giving Rossa some time to calm down. “Are you okay, my little tomato?” he asks gently. Rossa stares out of her room window into the street, grunts a little, and shakes her head. “Haha, how about your cool daddy-o gives you some advice?” he says as he pats his daughter’s head. “Rossa, it’s okay to be angry; even I get angry too. You did the right thing protecting that kid, but you have to remember that it’s more important to protect those who need it than to focus on hurting those who hurt others. Know who to be angry at and use your anger for a good cause. Pa doesn’t want you to go down the wrong path.”

Rossa hesitates a little until she finally asks, “How do I know if I’m doing what’s right or wrong?” Her dad gives his usual wide grin and pats her head again. “It is not easy; sometimes the answer is not so clear. But focus on protecting who you care about and work on helping those who need it. Love is always the answer, my sweet little tomato. Now come and have your dinner, I made a new tomato dish for you to try.” “But Pa, your tomato and eggs are disgusting, you !” He chuckles and gives her a reassuring thumbs up. “I know, I tried something new and I’m sure you will like it. Besides, you have to be a good role model for your baby sister.” And the scene ends.


the-hydroxian-artblog:
“imagine next game they just start talking Zoomer with no warning or explanation”


foone:

foone:

I haven’t seen it get mentioned on Tumblr yet, but Samsung has just gotten caught in a hilarious scam: they apparently advertised how their latest phones can take great pictures of the moon, compared to other phones!

But it turns out they aren’t using machine learning to clear up blurry pictures; they’re using machine learning to detect when you’re taking a picture of the moon, and then they swap in a saved PNG file of the moon.

A Reddit user figured this out by taking an image of the moon from Wikipedia, blurring it in Photoshop, and then trying to photograph it from across the room with all the lights turned out.

Their Samsung phone somehow managed to “clear up” the blurriness and recover details that weren’t there in the first place. Because it’s just cheating.

Here’s the first reddit post in the saga.

https://www.reddit.com/r/Android/comments/11nzrb0/samsung_space_zoom_moon_shots_are_fake_and_here/


Next step obviously is for someone to disassemble the camera app, find the PNG, and replace it with something else.


townofcrosshollow:

townofcrosshollow:

The problem with art is that the more “advanced” you get, technologically speaking, the less impressive everyday people find it. Most people have tried to draw a face on a piece of paper with a pencil and sucked at it, so when they see a face drawn on paper with pencil they’re like “WOAH! That’s so impressive!” But most people have not had to use ZBrush or Substance Designer or Maya, so when they see something that is frankly incredible in those programs they have no clue just how great it is and have a middling, generally uninterested reaction. I like to call this the Photorealistic Pencil Drawing of Morgan Freeman effect.

To demonstrate what actual difficult professional-level art looks like, here’s a material that my program director showed us during our orientation. It’s a pile of dead fish, made by Blizzard environment artist Eric Wiley (source).

image
image

At first glance, knowing nothing about 3D surfacing, you might not know what’s so impressive about this. It’s just a bunch of models of fish, right?

Wrong.

That first image represents a single, flat polygonal surface with a material applied. Those fish are not modeled. They are not being processed by the engine as polygons. It is not visual trickery, where if you looked at it from a different angle the illusion would fade. It is a texture that can be applied to any model, and that model would then look like it was covered in 3D fish.

How does that work?

So a single material is made up of a whole bunch of different maps: base colour/albedo/diffuse, roughness, metallic, specular, ambient occlusion, etc etc etc. But the important parts for this incredible 3D effect are normal maps and height maps.
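(To make “a material is a bundle of maps” concrete, here’s a rough Python sketch of what that bundle looks like as plain data. The names, shapes, and values are made up for illustration; this isn’t any engine’s or Substance’s actual format.)

    import numpy as np

    SIZE = (1024, 1024)  # one 2D image per map, all at the same resolution

    fish_pile_material = {
        "albedo":    np.zeros((*SIZE, 3)),  # base colour (RGB) per texel
        "roughness": np.ones(SIZE),         # matte vs. glossy, per texel
        "metallic":  np.zeros(SIZE),        # 0 = not metal, 1 = metal
        "ao":        np.ones(SIZE),         # ambient occlusion (baked soft shadowing)
        "height":    np.zeros(SIZE),        # how far in or out each texel sits
        "normal":    np.dstack([np.zeros(SIZE), np.zeros(SIZE), np.ones(SIZE)]),
        #            ^ which way each texel faces; all "straight up" here
    }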

Normal maps and height maps are both types of bump maps, which means they modify how the surface texture appears to the camera in order to create details on the surface. Here’s a height map and a normal map for the same material:

image

Height maps tell the renderer how far in or out the area of the texture should appear, and then the computer calculates, based on this information and the camera’s position, how squished or stretched that part of the texture should be. A height map is what makes the fish appear to occlude the fish sitting behind them, because the renderer knows that a light pixel closer to the camera should be more stretched out than a dark pixel farther away from the camera.
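If you want to see the gist of that in code, here’s a toy Python sketch of the usual trick (often called parallax mapping): shift the texture lookup along the view direction by an amount proportional to the sampled height. It’s a deliberately simplified version with made-up names, not what this material or any real engine literally does.

    import numpy as np

    def parallax_uv(uv, view_dir_tangent, height_map, height_scale=0.05):
        # uv: (u, v) texture coordinate in [0, 1]
        # view_dir_tangent: normalized view direction in tangent space (x, y, z)
        # height_map: 2D array of heights in [0, 1], where 1 = closest to the camera
        h, w = height_map.shape
        x = min(int(uv[0] * (w - 1)), w - 1)   # nearest-neighbour sample for simplicity
        y = min(int(uv[1] * (h - 1)), h - 1)
        height = height_map[y, x]

        # Shift the lookup along the view direction: light (high) texels get
        # stretched toward the viewer, dark (low) texels barely move, which is
        # what lets the near fish visually cover up the ones behind them.
        vx, vy, vz = view_dir_tangent
        offset_u = vx / vz * height * height_scale
        offset_v = vy / vz * height * height_scale
        return (uv[0] - offset_u, uv[1] - offset_v)

So a texel with height 0 samples the texture exactly where it sits, while a texel with height 1 samples it noticeably off to the side when viewed at an angle, and that offset is what reads as depth.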

On the other hand, normal maps tell the renderer what angle each part of the texture is at. The lighting is then calculated with this information in mind. For a flat surface, that normal map would just be a flat colour, but you could make the same flat surface appear curved by telling the renderer that the right side is facing more towards the right and the left side is facing more towards the left. You can also use this to create indents that appear truly indented, or seemingly 3D details like rivets or panels, without having to actually model these things and increase your poly count.
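Again as a toy sketch (made-up names, not a real engine’s API): the renderer lights each texel with the normal stored in the map instead of the polygon’s flat normal, so a perfectly flat surface shades as if it were bumpy.

    import numpy as np

    def decode_normal(rgb):
        # Normal maps store a direction as a colour: remap [0, 1] back to [-1, 1].
        n = np.asarray(rgb, dtype=float) * 2.0 - 1.0
        return n / np.linalg.norm(n)

    def lambert(normal, light_dir):
        # Basic diffuse term: how directly this texel faces the light.
        return max(0.0, float(np.dot(normal, light_dir)))

    light = np.array([0.0, 0.0, 1.0])          # light pointing straight at the surface
    flat = decode_normal([0.5, 0.5, 1.0])      # the "flat colour": a straight-up normal
    tilted = decode_normal([0.8, 0.5, 0.8])    # a texel painted as leaning to the right

    print(lambert(flat, light))    # ~1.0 -> fully lit
    print(lambert(tilted, light))  # ~0.71 -> shaded as if it were angled away from the light

Same geometry, same light; only the stored normal changed, and the shading changes with it.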

What’s truly impressive is that all of this was created in Substance Designer, a truly impenetrable node-based program that even a lot of professionals in the field aren’t very comfortable with or don’t use at all in their workflow. Node-based means that none of these maps were painted; they were constructed out of basic shapes and parameters, all individual nodes connected to create a final product.

This is what that texture actually looks like:

image

That is hundreds of nodes doing things like “blur” and “blend” connected together to create the actual body of the fish. And that’s only the body. It continues off to the right, where there are even more nodes to convert these shapes into albedo, normals, height, occlusion, etc etc etc.
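If the node idea sounds abstract, here’s a toy version of it in Python: every map is the output of small operations (shape generators, blur, blend) wired together instead of being painted. These few functions are invented for illustration and are nothing like Substance Designer’s actual node library, which is exactly why a graph of hundreds of real nodes is so absurd.

    import numpy as np

    def disc(size, radius):
        # Shape-generator "node": a white disc on a black background.
        y, x = np.mgrid[:size, :size]
        return ((x - size / 2) ** 2 + (y - size / 2) ** 2 <= radius ** 2).astype(float)

    def blur(img, passes=3):
        # Blur "node": repeated 3x3 box blur (a crude stand-in for a gaussian).
        out = img
        for _ in range(passes):
            padded = np.pad(out, 1, mode="edge")
            out = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(3) for dx in range(3)) / 9.0
        return out

    def blend(a, b, mode="max"):
        # Blend "node": combine two greyscale maps.
        return np.maximum(a, b) if mode == "max" else (a + b) / 2.0

    # "Wiring" a few nodes together: two soft blobs merged into one height map.
    height_map = blend(blur(disc(256, 40)),
                       blur(np.roll(disc(256, 30), 60, axis=1)))

Now imagine doing that, but the output has to read as an entire pile of individually shaded fish.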

I wouldn’t even know how to begin creating this. The level of skill this requires is off the fucking charts. And yet if they saw this in a video game, the average player would not think twice about it. Depending on the type of game you play you’ve probably personally seen dozens if not hundreds of materials as complex as this in video games and not thought twice about them. This is what I mean.
