An Apocalyptic City in Blender: Lazy Speedrun

I was watching another lazy tutorial and had the impulse to try it out for myself. So here is a speedrun of an apocalyptic city: one basic building multiplied many times. No need for elaborate post-processing stacks or piles of artfully arranged rubble – this is the MVP (minimum viable product), shipped to you in 20 minutes (or less, in the case of this slightly sped-up video…)

I think that me making these sorts of videos is the present-day equivalent of embarking on digressions whenever I have an exam or important project to complete; instead of doing the important thing, I suddenly get all sorts of ideas for ridiculous things, like making more screencasts of myself doing something in Blender.

For years I’ve watched countless online tutorials on YouTube, many of which were set to some generic, vaguely-inspirational electronic music (I confess that I have playlists full of things like the YouTube tutorial classic, Tobu’s Candyland, and other NCS Releases), and I took great joy in choosing royalty-free background sounds for this.

People, the keyword for this type of tutorial background music is “CORPORATE TECHNOLOGY”. Don’t go for CINEMATIC or DRAMATIC or INSPIRATIONAL, as there is a chance it might end up too teenage-over-the-top self-aggrandising. As it turns out “CORPORATE” plus “TECHNOLOGY” usually results in something blandly aspirational and futuristic.

A Shopfront in Blender and Unity: Lazy Speedrun using UV Project from View

After encountering Ian Hubert’s World Building video (how did I not see this until now?) I had an epiphany about a different way of making things in Blender, besides photogrammatising things or modelling everything up from scratch. For many years I had actively avoided trying to understand UV mapping because I considered it too time-consuming, and like he mentions, the classic example is this unwrapped 3D person whose face is the stuff of nightmares:

HA! I have surely attempted to create and unwrap a face like this countless times, only to horribly botch it and create something unintentionally horrific (and horrific in not even an interesting way).

Every time this happened, I simply accepted it to mean that I was not likely to make it as a character designer or a UV mapping specialist in this life… I mean, you gotta pick your battles. But every time I saw this map it was like a symbol of all the UV mapping I would never learn to do because I AIN’T GOT THE TIME TO DO IT…

So the UV Project from View feature is an absolute game changer. I actually used UV Project from View to make some pieces of bread previously (for the titular BREAD in the BREAD AND BUTTER game I am working on), but I hadn’t connected the dots to the possibilities… till I saw this…

As a trial run, I did a screen recording of myself doing a speedrun: making a shopfront in Blender and importing it into Unity, which took 14 minutes in real time (including a lot of hemming and hawing and undoing). In fact, editing the video you see here in iMovie took way longer at 40 minutes (according to RescueTime), including video exporting and uploading time.

The image I am using is a picture of Hiep Phat on Walworth Road. Yes, I know it is not even in Stoke Newington – it’s just another shop found via the keyword “Church Street Stoke Newington”. Sometimes you just need a little hook to get you started. The image can be found on Flickr from the user Emily Webber and is shared under a CC BY-NC-SA 2.0 licence.

Ironically, yes, I have titled this as a SPEEDRUN using a LAZY technique, because the point is that I ain’t got time to do it the complicated unwrapping way! I’m not sorry that I didn’t even unwrap the ground (the pavement in front of the shop) properly, because even without the ground properly unwrapped it kinda passes muster!
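For the scripting-inclined, the same lazy projection is exposed as an operator. A minimal sketch, assuming Blender 2.8+ and that you run it from a 3D View context (it needs a viewport to project from, so it won’t work from the Python console):

```python
import bpy

# with the shopfront mesh selected, and the viewport aimed at your reference angle:
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# same as U > Project From View in the 3D viewport
bpy.ops.uv.project_from_view(orthographic=False,
                             correct_aspect=True,
                             scale_to_bounds=True)
```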

The resulting shop front is very acceptable and definitely usable as a game asset that you might see glancingly from a distance.

Bread and Butter in a Field of Dreams (Coming July 2021)

This July, I’ll be releasing a free-to-play interactive experience titled “Bread & Butter In a Field of Dreams” for Mac/Win desktop. But actually, you could say that this work originated under a different name – “The Legend of Debbie”…


Do you want to get a reminder when
“Bread & Butter in a Field of Dreams”
is released for download,
or to hear first about
Debbie’s upcoming projects?
Join Debbie’s newsletter for all DBBD updates!


“The Legend of Debbie” was originally made as a commission for Asian Film Archive’s State of Motion in January 2021. It was my way of trying to use the archive of my own artwork as the source material for a sprawling game: exploring the different works as strange portals transporting you to weird spatialised versions of the works, and splicing my works with a partially fictionalised narrative (approximately 25% fiction, 75% reality).

The titular “legend” for the work was this directory which sorted my works into many different categories – a map legend. When I had time I was going to put more symbols all over the place, maybe have a little radar map overhead as well. I also had a lot of fun designing different rooms to represent different works.

I originally wanted to design a LIVE VR experience for “The Legend of Debbie”. Rather than releasing the game (which would have required so much more testing on the development side than running it as a moderated tour), I would run it as a live event (workshop) where participants could come down in different timeslots to experience this VR game (facilitated by myself)…

Imagine how fun it would be rolling through these odd spaces…

But then the Phase 2 Heightened Measures kicked in again, so we couldn’t have live events like this anymore. So… I did not make a VR version for “The Legend of Debbie”. And in any case, there was something that disturbed me about the final presentation of Legend.

IT JUST WASN’T FICTIONAL ENOUGH!!!

I have come to the conclusion that there is no room for nuance. Or maybe I am not very good at nuance (it is something I am working on, but I suspect that nuance does not come easily to me mainly because my real life personality is too excitable and shouty and maybe a bit childlike and overly earnest at heart).

Instead of developing The Legend further, I somehow ended up making a completely new game from scratch. One in which very deliberately NONE of the works were shown in the game world in their original form, besides the first room which replicates the Wikicliki exhibition by the Singapore Art Museum, currently in the Ngee Ann Kongsi Concourse Gallery (Basement) of National Gallery Singapore. The show runs until 11 July 2021.

Since we couldn’t be in the gallery itself for the talk, I re-created the gallery for a talk on 29th May (a conversation between myself and curator Mustafa, with whom I have worked closely over the last few months). Instead of boring slides, and based on the items that Mustafa was interested in discussing, I brought them into the gallery space through various 3D modelled props on a table, including a few laptops handily scrolling through my actual Wikicliki and a spreadsheet of the Here the River Lies cards (many credits to George for painstakingly digitising them).

From this totally realistic representation of a real exhibition you eventually get teleported to another world where there are lots of objects which are directly representative of the projects I’ve worked on over the last 10 years, but nothing is represented in the original form in which it was made.

In the world of the Field of Dreams, every single artwork I have made in the last 10 years is turned into a transmogrified version of itself – a pop translation of the work which could comfortably exist within a commercially lucrative museum retail shop (a la MOMA shop or NAIISE or any one of those shiny design shops)… or in a dusty basement reading room within an alternative community-based establishment for which there is no lack of heart but financial viability is always a question (such as The Substation’s Random Room).

Somehow, making art is an act of translation for me. I don’t really seem to start by drawing or sketching, but by writing; then I have to translate that into sketches, and from sketches into whatever digital medium I am working in. And this act of translation seems so arbitrary at times. Many ideas could have turned out differently had I chosen to make them in a different medium. Perhaps this expresses the tension I feel between making work as an artist and work as a designer/design educator (which earns me my living). The art can be poetic and ruminative and open-ended, whereas the design has to fulfil the brief requirements and ultimately has to be functional (and most likely measurable).

So I thought that instead of a Space Geode rendered all in white, I would have a mildly tacky Space Geode Citrus Squeezer; instead of The Library of Pulau Saigon, its various components would be turned into functional items such as a tic-tac-toe set featuring the Chinese Spoon as the noughts and the Political Party Badge as the crosses (something with the potential to be a slightly tacky coffee table centerpiece). My pulsed laser holography work, “War Fronts”, would be rendered instead as a jigsaw set. And instead of my print of 100 of my dreams from my Dream Syntax book, I turned it into a scratch-off chart of the 100 dreams. Because scratch-off maps are all the rage now on everyone’s internet shopping list, aren’t they?

Along the way I er…. got a bit too excited because who needs to write a book when you can just make the cover for the book? I was churning out dozens and dozens of pdf book cover textures to populate the DBBD SHOP.

So, perhaps we can’t quite call this work “The Legend of Debbie 2.0” anymore. Maybe it should be called by the name that seems more appropriate for it now: Bread & Butter in a Field of Dreams.

The work takes its name from a 2013 ACES study by the NAC – apparently the first survey of its kind done on arts and cultural workers, examining how on earth they make their living. I do not know which unnamed arts/cultural worker gave the survey such an evocative name, but here I have made the breads and butters literal, to be collected up before you can gain entry to the next scene.

Special mention also goes to another big survey I participated in not too long ago, which asked artists some very sobering questions about what we thought had advanced our artistic careers or inhibited them, with the dropdown list of items that could potentially limit our careers being twice as long as the advancing list. (In an earlier iteration of the study, it was suggested that we dig up our past 10 years of tax returns to examine the difference between our art income and non-art income. Me, I almost thought this was like some cruel form of “formative assessment” – “Alright, you got me, I’ve NOT been solely living off my earnings as an artist, and in fact, at times this whole “art” thing is frequently a complete loss-leader operation!”) I have many ambivalent feels about this. On one hand, my desire to make art isn’t about the money, but on the other hand I also do want to fix the current state of affairs…

There’s a maze and some other weird shizz coming up…

The world is still very much a work-in-progress and I look forward to fleshing it out for July’s “workshop” and releasing it as a free game for download! My goal is a release by July 2021! Methinks I might even do it as a gameplay video – I quite enjoyed this live stream (ticketed as a workshop, but really more like a Twitch stream, with me having set up OBS and all the ridiculous animated overlays and chats).

I also did another detailed breakdown of the time I spent on this last week using RescueTime. RescueTime tracks the time I spend in each app, and it is handy in that it splits my time into working hours (defined as 9am-6pm) and non-working hours (6pm-9am), so I can sift out the time I spend on personal projects versus time on my day job. My secret to eking out the time is usually to work for 1-2 hrs after Beano sleeps at night and to wake at about 4-5am to work.

It goes to show that despite working full time and having a time-consuming baby bean (with the help of dingparents dutifully caring for her whilst I work), it is still possible to eke out the time to maintain an active artistic practice if one has the will to do so (and the discipline to wake up early).

It does feel like a culmination of 3D skills I have taken years to acquire:
2014: when I realised how non-existent my 3D design skills were
2016: when I made myself try to make one blender render a day
2017: intentionally producing new works using 3D
2019: intentionally producing new works in Unity (very basic at that stage)
2020: taking the Unity Developer cert at my workplace, supervising more Unity-based projects
2021: being able to build things like this in a week (on top of a separate full-time job)

I’ve seen several GDC talks and devlog videos on youtube detailing how every successful game dev probably has dozens of “failed games” before they finally make the one game they are happy with, that one breakthrough game. Likewise I don’t expect Field of Dreams to be perfect on its July 2021 release but I hope to collect lots and lots of feedback after releasing it so I can improve the experience!



Quick things to do in Blender: Video Editing, Bake Sound to F-curves, VR/3D Storyboarding, Compositing 3D model into photo, and Motion Tracking

I’ve used (and taught) Blender quite intensively for a couple of years now, but I haven’t really mined all its possibilities yet, and even today when I watch different staff and students work in it I still pick up new things from time to time. My selection criterion for these features: YES, you could conceivably do all of these, even with just 5 minutes to spare, perched on the edge of the bed with your laptop and mouse, half-expecting baby to wake up at any moment…

Things you can do rather quickly in Blender:
Simple Video Editing
Bake Sound to F-curves
Simple Compositing of 3D model into photo
3D/360 Illustration draft with Grease Pencil
Motion Tracking to composite a 3D model into a Video

1. Edit a simple video (cuts, cropping, overlays, audio, etc) in very little time

Earlier in March I attended a training course, and because I’m a completer-finisher sort of person I ended up doing the training material in two programs simultaneously, just to see if Blender could do everything Premiere could do. It turns out that YES, you can do video editing in Blender, and it is even faster and simpler than doing it in Premiere! The interface actually looks very similar to Premiere, and if you go into the Video Editing view there is really no excuse for having any UI-related issues, because the interface is just so easy now!

Features you might need like text overlay, image overlay, offset, cropping, transitions, fade in/fade out, panning audio, compositing, motion tracking – all of them are possible in Blender! I think I might use this for my next video edit.

2. Bake Sound to F-curves

F-curve refers to the curve of the interpolation between two animated properties or keyframes. Interpolation has modes (linear, constant, bezier) and easing equations (linear, quadratic, cubic, exponential, etc). But the fun feature in Blender is the ability to bake a sound to an F-curve – making the sound wave become the F-curve, such that your animation pulses along with the audio.
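If you want to script the bake, it’s exposed as an operator too. A hedged sketch, assuming Blender 3.2+ (for temp_override) and the bpy.ops.graph.sound_bake operator; the //beat.wav path is a hypothetical stand-in for your own audio file:

```python
import bpy

obj = bpy.context.object
# seed a keyframe on the property you want pulsing (here: Z scale)
obj.keyframe_insert(data_path="scale", index=2, frame=1)

# Bake Sound to F-Curves lives in the Graph Editor, so override the context
for area in bpy.context.screen.areas:
    if area.type == 'GRAPH_EDITOR':
        with bpy.context.temp_override(area=area):
            # low/high band-pass the audio so only the bass drives the curve
            bpy.ops.graph.sound_bake(filepath="//beat.wav", low=20.0, high=250.0)
        break
```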

3. Do a sketch/storyboard for VR/360 or 3D illustration with the grease pencil

Personally I don’t use this quite enough, but the grease pencil is super handy for making rough sketches or even a storyboard before you do an illustration work. For example, I saw an interesting video in which someone used the grease pencil to good effect to storyboard a 360° work here:

You create an empty grease pencil object (Shift-A) and then go into “Draw” mode. You can only draw on the flat image plane that you are facing, but after you draw, you can move, rotate, and scale the grease pencil drawing at will and move it all around the scene. Many possibilities!
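The same setup in Python, for what it’s worth – a sketch assuming the pre-4.0 operator and mode names (4.x renamed the grease pencil bits):

```python
import bpy

# add an empty grease pencil object at the 3D cursor...
bpy.ops.object.gpencil_add(type='EMPTY')
# ...and drop straight into Draw mode
bpy.ops.object.mode_set(mode='PAINT_GPENCIL')
```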

4. Composite a 3D model into a Photo (the simple way)

Somehow this has gotten even easier over the years. You can set the world image background’s texture coordinate vector to “Window”, and when you look at the Render viewport your object is now in the world, with all the colour and light coming from the background image itself. Works if you only have 5 minutes before the baby wakes up and you want something super simple. 😀
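In node terms it’s just Texture Coordinate (Window) → Image Texture → Background. A hedged script version of that wiring (the //backplate.jpg path is hypothetical):

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

img = nodes.new("ShaderNodeTexImage")
img.image = bpy.data.images.load("//backplate.jpg")  # your photo

coords = nodes.new("ShaderNodeTexCoord")
# "Window" maps the image flat across the render window, so the backdrop
# lines up with the camera view and its colours light the object
links.new(coords.outputs["Window"], img.inputs["Vector"])
links.new(img.outputs["Color"], nodes["Background"].inputs["Color"])
```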

5. Motion Tracking to composite a 3D model into a Video

I decided to sit down and spend a few minutes trying out camera tracking, which I’ve always known was a feature. Can you do it in a few minutes? Well, yes, in the sense that Blender can do most of the legwork for you with the camera solve, but you’ll need to spend some quality time editing the tracks for best effect (especially for correct rotation). Above is an example of a terrible solve… but it kinda works!

The Kappa’s Izakaya: 360° Illustration Process

Recently I worked on a 360° illustration of an Izakaya in Daryl Qilin Yam’s Kappa Quartet and I was asked if I could share a bit more about the process of doing such an illustration.

Artistic disclaimer: It just so happened that I watched a lot of Midnight Diner at the time when I was doing this illustration, so those spaces were definitely in my mind’s eye. There was also the show Samurai Gourmet which was a bit tiresome to watch, but had a few good shots of a traditional izakaya too. Alas, although I have visited Tokyo several times before, at this point I haven’t really been to a bar or izakaya in some years now…


From “Midnight Diner: Tokyo Stories”


From “Samurai Gourmet”

One thing I realised from these portraits of izakayas: when in doubt about how to fill the bar space, you can put down stacks of tiny crockery or cover it up with a cupboard!


I even made a little crackleware… not that the detail is visible in the final render

Another disclaimer: where 3D modelling is concerned, I mainly design spaces and architectural/interior renders – I’m not a character designer! This will probably be apparent to people who actually do proper animation/character design, because here I chose to render all the other people in the scene in this weird white low-poly form. Personally I thought it a good fix for my weakness, and that it kinda worked for this ‘horror’ piece…

Initially I thought that I would actually try to do the entire drawing by hand, because I have genuinely enjoyed doing similar illustrations entirely by hand in the past – especially with lens distortion like this:


2 illustrations from the set of 4 that I was commissioned to do for the Commuting Reader

I usually work out a lot of the alignment for this kind of illustration by making a 3D model and rendering it with a fisheye or panoramic lens. After arranging white blocks in the space and rendering it out, I just use the lines in the render as perspective reference for my drawing.


Example: this plain equirectangular render with no materials…
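Setting up that kind of render camera is quick. A sketch assuming pre-4.0 Cycles property paths (the panorama settings moved off the cycles sub-struct in 4.0):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

cam = scene.camera.data
cam.type = 'PANO'
cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # or 'FISHEYE_EQUISOLID' for a fisheye

# 2:1 is the standard aspect ratio for equirectangular output
scene.render.resolution_x = 4096
scene.render.resolution_y = 2048
```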

And for all the other details that you need to fill in by hand, you can rely on an equirectangular grid (here is a link to an equirectangular grid by David Swart that you can use as a template) and think of it as a 4-point perspective drawing, like so:

Here’s a 4 hour sketch I made using the grid for the fun of it in 2018…
(Back when I had a lot of free time huh)

The problem right now is that feeding and caring for Beano has made it extraordinarily difficult for me to use the tablet or Cintiq. If left to her own devices, she wants to pull on all the Type-C cables and gnaw on my stylus and then slap my Cintiq screen! Attempts to set up my workstation in the bedroom so I can use the Cintiq when she’s asleep have failed on baby-safety grounds. In fact I’ve more or less resigned myself to the fact that spending time with the tablet is impossible now – WHO WANTS TO BUY A MEDIUM WACOM AND/OR A CINTIQ PRO IN EXCELLENT CONDITION??? – and I’ve had to streamline my time spent designing, thinking of the fastest way to get the visual output. Hours spent doing digital painting like in the old days? Not happening anymore. A Blender render is all I can muster now, which is great because whilst I feed and entertain Beano, I can easily set a render going, so that I feel as if my illustration is partly doing itself whilst I’m busy…

I also use a render farm to speed things up a bit, and I usually do a smaller-resolution render to check that things are alright before doing the full size. At 50% of the resolution I wanted, it cost about 40-60 US cents (0.85 SGD) for each one. For the final render at 100% resolution and twice the samples, it cost about 4 USD (5.60 SGD).

I don’t know how most people do the next step, but I usually go through a process of annotating my renders in Monosnap and then ticking the annotations off as I do the edits:

Finally we end up with the base render, onto which I can add faces and other details in Photoshop. I find that adding a bit of noise (0.5%-2%) also helps make it more ‘painterly’, because when the render is too sharp it becomes a bit disconcerting and unreal. I also drop the file periodically into this equirectangular viewer to see if the work is shaping up correctly – common issues include (1) things in the image that seemed further away suddenly appearing extremely close to the camera view, or (2) items being blocked when you render the specific view – so some time needs to be spent fine-tuning the arrangement.


Render Breakdown

This was another work made possible by the Dingparents who came down to take care of Beano on the weekends so I could continue my artistic pursuits! I am grateful to have the time to continue to make my work.


Come see the final image at Textures: A Weekend of Words, at Sorta Scary Singapore Stories by Tusitala.

13 – 22 Mar
10am – 10pm
The Arts House

Textures: A Weekend of Words celebrates Singapore literature and its diverse community. No longer a solitary experience, reading becomes a shared adventure through performances, installations, and workshops that will take you on a trip through the literary worlds of local authors.

The third edition of the festival takes on the theme “These Storied Walls”. Inspired by The Arts House’s many identities as a Scotsman’s planned estate, our nation’s first parliament, and now Singapore’s literary arts centre, the walls of The Arts House have been etched with the stories of those who have walked these halls.

This year’s programming features more installations and participatory activities that invite you to go a step further — move a bit closer and look a little longer. As you discover undiscovered narratives of your own, join those who have come before and weave your story into the tapestry of The Arts House.

Textures is co-commissioned by The Arts House and #BuySingLit, and supported by National Arts Council

More about Sorta Scary Singapore Stories

Blender & Unity: Manually Rigging Blender Humanoid Characters for use with Unity Mecanim


I’m definitely no character animator by trade, but there comes a time when you end up with a Unity project that somehow requires it. There are obviously many automatic rigging methods available (Blender actually has an auto-rigging system called Rigify for biped humanoids), and you could even download rigs made by other people and plonk them into your scene. But I found that many of the rigs, including the Rigify one, involve so many complicated bones you don’t need that you end up sifting through them, deleting unwanted bones, renaming bones, and perhaps coming away with the impression that rigging up them bones is impossible.

Although it may seem terrifying at the beginning (I’m not an animator or rigging specialist!), I found that, surprisingly, it is not that difficult to manually rig up all your bones if what you have is a very simple humanoid character. You just need to be orderly and stick with the admittedly tedious bone-naming process. (Although our character is blobby, we’re sticking with a humanoid rig because we’re going to use it with the Kinect to sync it with the movement of the human user, and the Kinect is going to return a humanoid set of values that we’ll need to rig our character up to…)

According to the Unity Blog’s post on Mecanim Humanoid:

“The skeleton rig must respect a standard hierarchy to be compatible with our Humanoid Rig. The skeleton may have any number of in-between bones between humanoid bones, but it must respect the following pattern:”
Hips – Upper Leg – Lower Leg – Foot – Toes
Hips – Spine – Chest – Neck – Head
Chest – Shoulder – Arm – Forearm – Hand
Hand – Proximal – Intermediate – Distal

This is the list of all the bones you need (I found it useful to copy and paste in these names directly)

head
neck
collarbone.L
collarbone.R
upperArm.L
upperArm.R
lowerArm.L
lowerArm.R
hand.L
hand.R
chest
abdomen
hips
upperLeg.L
upperLeg.R
lowerLeg.L
lowerLeg.R
foot.L
foot.R
toes.L
toes.R

Optional: eye.L and eye.R

For starters: ensure that your character model is positioned at the origin and that its pivot point is also at the origin (0,0,0). Make sure you reset the scale to 1 just in case (Ctrl+A, select Scale). The hip bone is the key bone in all this, so start by creating one big bone from the bottom of the hip to the top of the chest. Hit Space, start typing “Subdivide Multi” (Armature), and give it 2 cuts so you get 3 bones. These will form the hips, abdomen and chest bones.

After you’ve done the main spine bones, you can turn on x-axis mirror.

– Select the ball on top of the bottom bone (the hips bone). Make sure Options > Armature option > X-Axis Mirror is selected, then press Shift-E to extrude mirrored bones. When you’re in mirror mode, every time you create a new bone you’ll get a second one mirrored on the other side of the X-axis. Remember that you’ll have to rename BOTH bones later on – and if you are facing your model face-on, remember that .L is actually to your right and .R is to your left, so name them accordingly.

– Arrange the leg bone into position (you may need to uncheck “Connected” in order to let the leg bone go into the right position). Reposition the leg bones away from the hip. Subdivide Multi (1 cut) this leg bone into two bones, forming the upperLeg and lowerLeg.

– Shift-E to extrude two more bones for the foot and toes, and also add in the collarbone, arm, and neck+head bones. Do make sure you keep it all in a standing T-pose (as if the character is standing in the shape of the letter T).

– Ensure that all of your bones are renamed correctly as per the list. If there is an .L bone there must always be an .R bone.

– Go into Object Mode and select first the character, then Shift-select the armature. Press Ctrl+P and select Set Parent To – Armature Deform – With Automatic Weights. Your computer might lag for a second before it’s all connected up.
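If you’d rather sanity-check the naming and do that last parenting step in Python, here’s a hedged sketch – the object names “Character” and “Armature” are hypothetical placeholders:

```python
import bpy

rig = bpy.data.objects["Armature"]
body = bpy.data.objects["Character"]

# every .L bone must have a matching .R bone
names = {b.name for b in rig.data.bones}
for n in sorted(names):
    if n.endswith(".L") and n[:-2] + ".R" not in names:
        print("missing mirror bone for", n)

# Ctrl+P > Armature Deform > With Automatic Weights, script style:
bpy.ops.object.select_all(action='DESELECT')
body.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig  # the armature must be active
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```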

From there, you’re in the home stretch. Export your Blender model in FBX format and import it into Unity; in Unity, set the rig to Humanoid (instead of Generic) and, at the bottom of that panel, hit Apply.

Let the wild rigging begin!

See also:
Animate Anything with Mecanim

Blender Cycles: 12 Ways to Reduce Render Time

Recently my challenge was to produce 2000 x 3 frames for the production of three holograms. At 50 minutes per frame, 6000 frames would have taken a frightful 208.333333 days to render out on one already relatively speedy laptop. On the slowest mechanised computing potato in the household (a 5+ year old MacBook Pro), a single frame would have taken several hours. But the real issue was not the complexity of the scene – it was the way in which I had initially constructed the scene, and the render settings I had chosen.

After several weeks of editing (and frantically googling “HOW TO REDUCE RENDER TIME”), here is a compilation of the ways I used to reduce my render time (CPU) for a single frame from a horrible 50 minutes down to 1-2 minutes. It’s not an exhaustive list, but I thought it would be useful to WRITE IT ALL DOWN NOW in case Future Debbie completely forgets everything after the hard slog of trial and error of the last few weeks…

1. Duplicate Linked Object Instances the right way

This may seem pretty obvious but I think it needs to be at the top of every list. The right shortcut to duplicate an instance of an object is ALT-D, not SHIFT-D. SHIFT-D produces a completely new object with no reference back to the data of the original object. A linked duplicate shares the same object data, so any edit you make to the first object will change all the other linked objects as well. When you’re in a hurry it is easy to accidentally type Shift-D instead of Alt-D, but this has the potential to make a serious impact on render time.
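The same distinction exists in Python, which also gives you a quick way to check how many users a mesh datablock has:

```python
import bpy

bpy.ops.object.duplicate(linked=True)   # Alt-D: new object, SAME mesh data
bpy.ops.object.duplicate(linked=False)  # Shift-D: full standalone copy

# how many objects share the active object's mesh datablock?
print(bpy.context.object.data.users)
```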

2. Link Object Data

Let’s say you completely can’t recall whether you used Shift-D or Alt-D to duplicate your objects. If you go to the Mesh data panel you’ll be able to see how many objects are currently linked to the same data. If your mesh is unique and unlinked, first select all the other objects you want to share the data, then select the object that is going to be the master last (so it is active), and press Ctrl-L whilst the mouse is over the 3D view. You’ll get the “Make Links” dropdown menu, from which you should select Object Data to link the objects. Other links for materials, groups, etc can also be made with this shortcut.

Note that if for some reason you accidentally select objects which aren’t at all similar, all the non-active objects will be changed to take on the object data of the master object anyway…
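The script equivalent of Ctrl-L > Object Data, as a hedged sketch (the “Rubble” name is a hypothetical master object):

```python
import bpy

master = bpy.data.objects["Rubble"]              # the object whose data wins
bpy.context.view_layer.objects.active = master   # Make Links copies FROM the active
# with the other objects already selected:
bpy.ops.object.make_links_data(type='OBDATA')    # they now all share master's mesh
```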

In general, I personally found it useful to assign my materials really crazy colours in the viewport so that I could see at a glance which objects were the same and which were not.

3. Clamp Direct and Indirect

Usually you end up turning up the samples for a scene because there are too many ‘fireflies’ and too much noise, but clamping values can quickly remove these stray white dots which appear on the render. Clamping caps the very lightest (brightest) samples at a maximum value, so it removes those stray white fireflies, but in the process it will reduce the bright “pop” or light sheen that you might have wanted to achieve with certain glossy materials.

If you set Clamp too low, it will also cause the ENTIRE scene to be too dark/dimly lit, especially if you are using an HDR for environmental lighting, so don’t set the Clamp too low. The general advice is to start from a high number like 10 and then work your way down to see what works for your scene. I was able to set Clamp Direct to about 4 and Clamp Indirect to about 6 and still achieve acceptable results. As for the overall “dimming” effect the Clamp will have on the scene, you can increase scene brightness through the compositor with a Color Balance node, or simply do it in post.
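Those clamp values live on the Cycles scene settings if you prefer to set them in a script:

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.sample_clamp_direct = 4.0    # 0.0 disables clamping entirely
cycles.sample_clamp_indirect = 6.0  # indirect bounces cause most fireflies
```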

4. Subdivision surface

Subdivision surface is a modifier commonly used to create a smooth surface mesh from a blocky linear polygon mesh. It is done by splitting up the faces of the mesh into even smaller faces in a way that gives it a smooth appearance.

It is worth checking whether you have stupidly set a subdivision surface of VERY MANY iterations on something incredibly trivial and tiny, but also incidentally heavily duplicated in the scene…
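A quick way to audit the whole scene for exactly this mistake – a small hedged helper:

```python
import bpy

# list every Subdivision Surface modifier with its viewport/render levels
for ob in bpy.data.objects:
    for mod in ob.modifiers:
        if mod.type == 'SUBSURF':
            print(f"{ob.name}: viewport={mod.levels} render={mod.render_levels}")
```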

5. Decimate

Did you subdivide your terrain into unwisely tiny bits and then handsculpt it with a 5px clay brush?? If there is complex modelling, or you’ve applied Subdivision Surface to a mesh and are now regretting it, you can undo your CPU-killing subdiv-happy ways by decimating the meshes that don’t need smoothing! Add the Decimate modifier to reduce the number of faces.
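Adding one in Python, as a sketch:

```python
import bpy

ob = bpy.context.object
dec = ob.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = 0.1  # keep roughly 10% of the faces; tune per mesh, watch the silhouette
```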

6. Simplify

Let’s say you’re just rendering a preview for yourself and it’s not the final render. You can quickly set a global maximum for Subdivision Surface iterations and Child Particle numbers under Scene > Simplify. Just remember to uncheck the Simplify box when you’re producing the final render.
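The same toggle in script form (handy at the top of a preview-render script so you can’t forget it):

```python
import bpy

r = bpy.context.scene.render
r.use_simplify = True             # remember: set False for the final render!
r.simplify_subdivision = 1        # global cap on viewport subdivision levels
r.simplify_child_particles = 0.5  # show only half the child particles
```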

7. Delete unnecessary terrain

Set up Blender so that you can see several angles of your scene at the same time, along with the timeline if you need to scrub through it quickly. Go into Edit Mode (Wireframe) and highlight the excess terrain that never appears in the Camera view for the entire timeline, using either B (to draw a box) or C (to paint with a circular brush). Make sure you’re viewing in Wireframe mode though, because if you’re viewing in Solid you’ll only be able to select the vertices that you can see, rather than all the vertices in that area regardless of whether you can see them or not.

The most handy numpad shortcuts in the 3D view here are:
0 for Camera view
7 for Top view
5 to toggle between Perspective and Orthographic view

The resultant landscape will look a bit weird, like this, but you’ll save time by not rendering all the excess bits. Do keep the bits which you’ll need for light to bounce off to produce a realistic scene.

8. CPU/GPU Compute and Tile Size

If you have an Nvidia graphics card, you’ll still need to enable it in Blender’s User Preferences in order to use GPU rendering, which can drastically cut down your render time. When GPU works, it’s like magic. GPU can be dramatically faster than CPU, but it is also limited by the total amount of VRAM on the card – once it hits that limit the rendering process will simply fail (memory error). Also, I had to dramatically rein in my expectations – I have always insisted on using desktop-replacement laptops rather than a desktop, for portability (especially for my kind of work) – but one has to consider that laptop GPUs generally aren’t as powerful as their desktop counterparts in terms of VRAM, number of CUDA cores, and overall speed.

It is generally said that the tile size should either be a perfect square or a factor (i.e. an exact divisor) of the final resolution (having smaller bits of tiles left over is wasteful), but I think a lot more testing would be required to determine this for each type of scene and CPU/GPU. Generally, if you make the tile size too small, you incur more overhead from switching between tiles. You should experiment with the numbers and see what works for you…

What worked for my scenes (Intel Core i7-7700HQ @ 2.80GHz / Nvidia GeForce GTX 1060 6 GB GDDR5):
For CPU an ideal Tile size seems to be around 16×16 or 32×32
For GPU an ideal Tile size seems to be 312×312
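Scripted, those settings look roughly like this – hedged for pre-3.0 Blender, since Cycles X (3.0+) removed user-facing tile sizes and tunes them automatically:

```python
import bpy

# enable the CUDA device in preferences
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()
for dev in prefs.devices:
    dev.use = True

scene = bpy.context.scene
scene.cycles.device = 'GPU'
scene.render.tile_x = 256  # bigger tiles for GPU...
scene.render.tile_y = 256
# ...and something like 32x32 when falling back to CPU
```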

9. Number of AA Samples / Progressive Refine

The number of AA (anti-aliasing) samples will increase render time dramatically – render time scales roughly linearly with the sample count – and this was largely why my first renders were taking 50 MINUTES PER FRAME even on the best laptop in the entire household! How many samples are enough samples? How do you find out how many samples are good enough for you, visually?

Under Performance, there’s an option for Progressive Refine which will progressively show you the overall image at each sampling level. It can be slower to complete the entire image this way, but you can also stop it when you think the image is good enough. It’s useful to eyeball it until you find the number of samples you are happy with; then just use that number and uncheck Progressive Refine so it will be faster.
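In a script, the two knobs are (pre-3.0 property names, hedged):

```python
import bpy

scene = bpy.context.scene
scene.cycles.samples = 128                  # eyeball up/down from here
scene.cycles.use_progressive_refine = True  # watch it refine, cancel when happy
```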

10. Resolution, Output File Location, and Output Quality

When you “double” the size of the image, you’re actually making it four times as large (twice the width times twice the height), and your image will take 4 times the CPU/GPU time to compute! When making a preview rather than the final render, you can set the resolution percentage to 50%. But don’t forget to set it back to 100% when you are doing the final render!!! (ARGH!!!)

Make sure that you have set the correct output file location. If you are opening the blend file for the first time on a new computer, MAKE SURE YOU HAVE RESET THIS. Blender does this thing where it doesn’t tell you the output folder/drive doesn’t exist – it will happily render, and only tell you at the end of the process that it had no place to save the file.

Below that there’s also the compression/quality setting for it. For a preview you can set it lower but remember to set it back to 100% for the final render.
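All three gotchas (resolution percentage, output path, quality) sit on scene.render, so a pre-flight snippet like this can save an overnight heartbreak – a hedged sketch, with hypothetical paths:

```python
import bpy

r = bpy.context.scene.render
r.resolution_x, r.resolution_y = 1920, 1080
r.resolution_percentage = 100        # 50 for previews, 100 for the final!
r.filepath = "//renders/frame_####"  # '//' means relative to the .blend file
r.image_settings.file_format = 'PNG'
r.image_settings.compression = 15    # PNG compression is lossless, just slower
```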

11. Selective Render / Animated Render Border

Whilst in Camera view (shortcut Num-0) in the 3D view, if you use the shortcut CTRL-B you can demarcate a “selective render” border (which appears as a red dotted line) in the camera view. To release this selective render border, it’s CTRL-ALT-B.
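The border is also scriptable, in fractions of the frame:

```python
import bpy

r = bpy.context.scene.render
r.use_border = True                          # same as Ctrl-B in camera view
r.border_min_x, r.border_max_x = 0.25, 0.75  # left/right, 0..1 across the frame
r.border_min_y, r.border_max_y = 0.25, 0.75  # bottom/top
r.use_crop_to_border = False                 # keep the full frame, transparent outside
```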

Ray Mairlot’s extremely useful Animated Render Border allows you to selectively render an object moving through a scene, or even to create an animated border that can be keyframed.

When using this add-on, the final output is a frame that will still be the size of the entire render resolution, but only the selective render area will have an image and the rest will be alpha transparent.

12. Use a Render Farm

OK, so after I shaved scores of minutes off the render time, down to 2 minutes per full-resolution frame, I realised that 2 minutes x 6000 frames = 8.33333333 days – and that was 8.33333333 days that I certainly did not have! There are limits to what a great laptop with a good graphics card can do. When the computer is rendering, you can’t really use it for anything else that taxes the graphics card or processor, so rendering basically disables the computer for its duration.

So… there was no way around this – I had to use and pay for a render farm. I tried out RenderStreet, TurboRender and FoxRenderFarm, as they were the farms which came up when searching for Blender render farms.

The basic process for all of them is:

– Pack external data into blend file (tick the option to automatically pack)
– Upload a zipped copy of the blend file
– Choose your render settings
– Pay the render farm some money (for slower render) or LOTS OF MONEY (for faster render)
– Monitor render progress and render quality through web interface
– Magically download completed rendered files in record time

[Do however note that the Animated Render Border add-on mentioned above in this list will not work with the render farms, but you can write to most of these render farms regarding desired plugins and they will let you know if the add-ons and plugins can or cannot be installed]

A more comprehensive review of render farms coming up in the next post…

Render Farms: Which Blender render farm is the fastest and most intuitive?

There comes a time when you can no longer delude yourself over the notion that your trusty personal computer will be able to render your frames for you within this lifetime. You can’t wait half a year to finish rendering your Blender Showreel. You can’t wait 8 days to finish a render that you need tomorrow. And you’ve also got no time, money or space to build your own render farm. So you’ll just have to pack up your work, hand over the files to a render farm, and fork out all that cash…

…AND THEN GET READY TO BE AMAZED AT HOW IT WILL CHANGE YOUR VIEW OF 3D WORK WHEN YOU KNOW NOW THAT YOU CAN RENDER ANYTHING INSTANTLY!!!

MORE TIME BLENDING, LESS LAGGING!! Yes, there’s a likelihood that your wallet might be crying after this, but your time and CPU-time savings will probably be worth every penny that you pay the render farm. It means less of “rendering for 8 hours overnight and only then discovering you have some BIG FAT PINK MISSING TEXTURE in the middle of your render and realising you’ll have to wait for ANOTHER 8 HOURS as your computer turns into an overheated whirring pumpkin again”! It means you can be more experimental in your 3D modelling work since you know you can just generate a test preview cheaply and quickly. It means that you can have all of the frames for your animation rendered out without you feeling tempted to be stingy on the number of frames, quality, or resolution.

Here are some notes of my attempts to find a decent render farm – and to get up and running with that render farm in the shortest amount of time possible. I think that the factor of “BEING ABLE TO RENDER IN SHORTEST AMOUNT OF TIME WITHOUT GETTING LOST IN A WEBSITE” is pretty important. And having NEVER used a render farm before until this point, I signed up to RenderStreet, TurboRender, and Foxrenderfarm simultaneously and sought to find out which would reveal itself to be (1) the most intuitive to use and (2) the quickest to deliver the renders.



RenderStreet

RenderStreet is a Bucharest-based render farm with a deceptively simple interface that doesn’t look slick but… IT WORKS. They focus on just Blender and Modo, unlike the other render farms, which also cater to 3DS Max and a whole range of other archviz tools as well as video rendering. RenderStreet’s services come in two modes: On Demand, which is charged at $3/CPU hr, and One, which is a flat $50/mth for CPU-only rendering (actually extremely reasonable). The Render.st One plan is clearly very good for everyday jobs which are not rushed, but there is a 1-hour limit on the total render time for a frame (take note that they won’t reject your job until the actual render time passes 1 hr, so once you see that your render will exceed an hour it is best to cancel it and save yourself the wait). I was flabbergasted at the speeds provided by the On Demand version, but the costs can quickly stack up. I also liked how they render the frames of an animation sequentially, so if you stop, you can easily pick things up again from that specific frame. And I really LOVE their video preview feature, which allows you to preview and download your rendered frames in MP4 video format – very useful for checking animation output.

Debbie’s review: Would definitely use again for professional work if I had the budget.
Impressively fast renders, generates automatic video previews, also has an affordable “everyday plan” option.

TurboRender

TurboRender is a Russian render farm with a clean and logical interface, with a very prominently located live chat, which is useful for when you have to ask them stupid questions like “is Frame Pitch the same as Frame Step?” (answer: yes it is). I really liked that there were humans replying to me (as a first-time user) on every question I had about the process. Their price point seems lower than Render.st’s On Demand, and their speed is decent but not blindingly fast (obviously, in this CPU/GPU game, speed is money). They’re probably a good intermediate render farm for everyday jobs that aren’t rushed, as their rates are very affordable. They divide your frames into blocks and task different servers with different 100-frame blocks. The issue is that, unlike Render.st, you won’t be able to preview your rendered frames sequentially along the way, as the different servers take different amounts of time to finish their individual blocks of 100 frames; but when the job is done they will email you and you can download it all. In terms of feedback, however, their website is the best of the lot: you can see a panel with the progress of your files on every server, and I think it was a good design decision to include the live chat agent (a real human) on every single page. It can be a little hard to stop or edit a process once it has been started, though – you might have to use the chat agent to ask someone to help you – but their staff are very responsive (instant response).

Debbie’s review: Would definitely use again for personal work as it seems incredibly affordable.
Great and reassuring live feedback on progress of job, and very responsive staff on live chat.

FoxRenderFarm

FoxRenderFarm is a Shenzhen-based Chinese render farm with a plain interface that is a little more tricky than the previous two render farms (confusingly, some error messages may be in Chinese). I did a small test render with them but didn’t continue after a while because I was won over by Render.st. It’s clear that in a pinch they would also do the job, but their interface is significantly less intuitive than Render.st’s and TurboRender’s, and you’ll need to chat with the service agents to work out whether you’re doing things right or wrong. Someone from FoxRenderFarm did also drop me an email a day later to ask if I needed help resuming the job.

Debbie’s review: Requires more than just intuition to figure out how to use the site unlike the prior two.
Would need to speak to helpdesk to get started.


See also:
Blender Cycles: 12 Ways to Reduce Render Time

365 Days of 3D – Blender

At the start of 2016, I decided I should brush up on my 3D skills by doing a 3D render a day for practice purposes (365 DAYS OF 3D!). The 3D things I’d made in the past were mainly designed in OpenSCAD and then 3D printed, photographed, and digitally painted by hand in Photoshop and Illustrator. I’ve used everything from OpenSCAD, SketchUp, Blender, Modo and Solidworks to Meshmixer to produce and edit 3D models – but oddly without ever acquiring the proper finesse or experience of knowing how to light and render scenes properly (since I was producing models for the purpose of 3D fabrication/printing, rather than for 3D rendering). For the last few years, in a weird roundabout way, I found other ways to achieve an artificial/plastic 3D-rendered look without doing actual 3D rendering. So the next step seemed to be: what about becoming more skilled at 3D lighting/rendering so I can maybe find new ways to experiment with realism?

I decided to focus on the free tools like Blender and Meshmixer to see what would come of it (since then I have also gotten into Unity). But real life and work intervened, so I didn’t quite finish a model a day. However, a roundup of my Blender experiment for the first 50 or so days seems in order here…

See also:
3653d.tumblr.com – the best of my generic renders
3653d wiki – annotated notes on my Blender process and shortcuts

Blender

It took me a week of daily use to properly internalise the foundations of Blender, after which it became a joy!… or to be very frank, almost an addiction!?? A kind of twisted, warped game where I had to make replicas of everything I saw in reality; I’d take notes on where the objects and light sources were located and how the light was bouncing off things; I walked down the streets at night and unrealistically declared to George “LOOK AT HOW MANY POINT LIGHTS THERE ARE IN THIS SCENE!”; and railed against how reality and the building standards in real life are pretty shitty – in the sense that things don’t snap onto a grid and the average space/house has so many messy joints. (Realism in 3D modelling/rendering is composed of carefully calculated mistakes…)

For the first few days of working intensively with Blender, I couldn’t understand why nothing happened when you left-clicked, and all the shortcuts I hoped to find seemed mapped to completely different things than I expected, which made it incredibly frustrating to use at first. The options are Mode-dependent, meaning you must understand exactly which Mode the function you are looking for lives under (e.g. you can edit and see a modifier such as a Boolean or Subdivision Surface in Edit Mode, but you cannot Apply it unless you go back into Object Mode; or, if you are viewing in Perspective instead of Ortho, you cannot see the background image for reference). You’ll also need a mouse for Blender to make life worth living. But once you have finally internalised all the shortcuts, Blender works like a charm, and Cycles Render is surprisingly decent.

One of the best features of Blender as an open source tool is that it used to have a screencast keys plugin which displayed which key you had pressed, making it ideal for recording screencasts and teaching other people how to use it. Every complex application should have a function like this – it makes learning and teaching the application a lot easier! Everything I learnt about Blender came from watching people’s videos at 1.5x speed and looking at the little letters appearing in the corner of the screencast.

Sadly, Screencast Keys was removed in 2.7, but I found a way to activate it again – and then I made my first Blender screencast!! (Complete with blandly aspirational royalty-free background music!)


There is a lot of messy hidden geometry in this one, but I don’t care, since we’re not going to send a hairy bear to a 3D printer. My main goal here was to capture the little finger indentations which are often left on the short velveteen fur of the bear.

George (as a 3D layman) said that the visuals coming from Blender looked impressive to him considering the time taken to make them, but to be honest I haven’t become more impressed by Blender after using it – conversely, I have become less impressed by the 3D modelling I see everywhere else, after realising that clean geometric designs are sometimes face-slappingly simple to replicate. It wasn’t that the technology had become more awesome; it was simply that I needed to get to the point where I realised how easy it would have been to access the technology all along, if I just put my mind to it. I don’t know – why do I always have this idea that if it is “too easy” to learn, then it is not really true technical expertise? Or am I just being pedantic, or have I been insidiously conditioned into thinking that technical skill must be beyond me?

As an exercise, I tried to find images to replicate in 3D, to test my ability to reproduce a model (together with camera position and lighting) within a short time. I decided the reference images also had to be photos. Since these were meant to be quick exercises, I chose simple images that I could render within a reasonable amount of time. But the point was that it was necessary to have some fixed goal in mind – to see whether or not I could achieve a specific look, rather than settling for whatever effect emerged when I actually wanted to attain a particular one.

Here are the outcomes of a few of the other experiments:

#realLove

 

 

My Blender Render

 

A delightfully simple lighting and metal-shading exercise based on the set design for the new PC Music track featuring Chinese singer Chris Lee (李宇春) (track produced by A. G. Cook)

#miamiArtFairBlackBox

I ended up on some Hyperallergic “Concise guide to the 2015 Miami Art Fair” page – an event I have never cared for or desired to attend, past or future – and I did not know why I had landed on this page after reading something art- or design-related. But it doesn’t even matter anyway. Nothing matters anyway.

The image was titled “Shannon Ebner at ICA Miami (via icamiami.org)”. I have not googled it, but I’ll assume she is the artist whose typographic work is being shown in that black room.

 

My Blender Render
Woo and yay, I’ve replicated this crushingly boring typographic exhibition space in Blender.

#drivenToAbstraction

I saw this clean image after having just spent some time reading about Index of Refraction (IOR), which describes how transparent materials such as glass and acrylic/plastic bend light. A good list can be found here: IOR Values – plastic tends to be about 1.46 to 1.5. So I thought I’d just see how long it would take to reproduce this image.
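For reference, dialling an IOR into a Principled BSDF in Python looks like this – hedged for pre-4.0 input names (4.0 renamed “Transmission” to “Transmission Weight”):

```python
import bpy

mat = bpy.data.materials.new("ClearAcrylic")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Transmission"].default_value = 1.0  # fully transmissive, like glass
bsdf.inputs["IOR"].default_value = 1.49          # acrylic sits around 1.46-1.5
```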

A further close-up of it was found, which I reverse-Google-Image-searched and traced back to Catherine Lossing as the photographer.

 

My Half Hour Blender Render

And yes, it turns out it took just half an hour to make this. From scratch. Almost depressing, isn’t it? It almost feels like taking the piss. I could easily improve the reflection/refraction (which you can see in the original photo) if I spent more time on the lighting, but a rough proof-of-concept is sufficient for this exercise.

#whiteCubeGate


Reference taken from a random image I saw on Google Images whilst searching for something else entirely. I decided to just focus on the space and lighting, so I didn’t really care for reproducing the artwork beyond simple monogrammatic blocks stuck to the wall.

To be honest, I didn’t even know Theaster Gates was the name of the artist when I did this. Instead, whilst making this, I thought to myself, “hmm, why does this weird ‘Theaster Gates’ gallery look like the White Cube in Bermondsey?…” After which I googled it, and yes, it is White Cube… What can I say – I don’t follow the London art ting so closely, and I was not in London in 2011 when the gallery first opened and this exhibition was on. Sorry if that sounds ignorant (but actually, I’m not even really sorry, so…)

 

My Blender Render

WHO NEEDS TO GO TO REAL GALLERIES ANYMORE
WHEN ANYONE CAN GENERATE AND DISTRIBUTE IMAGES
OF 3D RENDERED TUMBLR ART GALLERIES ON INSTAGRAM??

In next week’s update on 365 Days of 3D: Meshmixer and Non-manifold Nightmares…