Blender Cycles: 12 Ways to Reduce Render Time

Recently my challenge was to produce 2000 x 3 frames for the production of three holograms. At 50 minutes per frame, 6000 frames would have taken a frightful 208.3 days to render out on one already relatively speedy laptop. On the slowest mechanised computing potato in the household (a 5+ year old Macbook Pro), it would have taken several hours per frame. But the real issue was not the complexity of the scene; it was the way in which I had initially constructed the scene, and the render settings I had chosen.

After several weeks of editing (and frantically googling “HOW TO REDUCE RENDER TIME”), here is a compilation of the ways I used to reduce my render time (CPU) for a single frame from a horrible 50 minutes to 1-2 minutes. It’s not an exhaustive list, but I thought it would be useful to WRITE IT ALL DOWN NOW in case Future Debbie completely forgets everything after the hard slog of trial and error of the last few weeks.
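For the record, the arithmetic behind those scary numbers is just frames × minutes-per-frame. A quick sketch in Python, using the figures from my project above:

```python
# Rough render-time arithmetic for a batch of frames.
frames = 2000 * 3             # three holograms x 2000 frames each
minutes_before = frames * 50  # at 50 minutes per frame
minutes_after = frames * 2    # at ~2 minutes per frame, post-optimisation

days_before = minutes_before / 60 / 24
days_after = minutes_after / 60 / 24

print(f"{days_before:.1f} days before, {days_after:.1f} days after")
# -> 208.3 days before, 8.3 days after
```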

1. Duplicate Linked Object Instances the right way

This may seem pretty obvious but I think it needs to be at the top of every list. The right shortcut to duplicate an instance of an object is Alt-D, not Shift-D. Shift-D produces a completely new object with no reference back to the data of the original object. A linked object shares the same object data, so any edits you make to the first object will change all the other linked objects as well. When you’re in a hurry it is easy to accidentally type Shift-D instead of Alt-D, but this has the potential to make a serious impact on render time.
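The difference is essentially shared data versus copied data. You can picture it with plain Python references (a rough analogy only, not actual bpy code):

```python
# A mesh is just a blob of data; objects point at it.
mesh = {"vertices": 8, "material": "red"}

# "Alt-D": both objects reference the SAME data block.
linked_a = {"name": "Cube", "data": mesh}
linked_b = {"name": "Cube.001", "data": mesh}

# "Shift-D": the duplicate gets its own independent copy of the data.
copied_c = {"name": "Cube.002", "data": dict(mesh)}

# Editing the shared data changes every linked object at once...
mesh["material"] = "blue"
print(linked_b["data"]["material"])  # "blue" - the linked copy follows
print(copied_c["data"]["material"])  # "red"  - the full copy does not
```

The renderer only has to hold (and process) the shared data once, which is where the memory and time savings come from.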

2. Link Object Data

Let’s say you completely can’t recall whether you used Shift-D or Alt-D to duplicate your objects. If you look at the mesh datablock, the number next to its name shows how many objects are currently sharing that data. If your mesh is unique and unlinked, first select all the other objects you want to relink, then select the master last (so that it is the active object), and press Ctrl-L whilst the mouse is over the 3D view. You’ll get the “Make Links” dropdown menu, where you should select Object Data to link the objects. Other links for materials, groups, etc. can also be made using this shortcut.

Note that if for some reason you do accidentally select objects which aren’t at all similar, all the other objects will be changed to take on the object data of the master object anyway…

Personally, I found it useful to assign my materials really crazy colours in the viewport so that I could see at a glance which objects were the same and which were not.

3. Clamp Direct and Indirect

Usually you end up turning up the samples for a scene because there are too many ‘fireflies’ and too much noise, but clamping values can quickly remove these stray white dots that appear on the render. Clamping caps the very lightest (brightest) samples at a maximum value, so it removes those stray white fireflies, but in the process it will reduce the bright “pop” or light sheen that you might have wanted to achieve with certain glossy materials.

If you set Clamp too low, it will also cause the ENTIRE scene to be too dark/dimly lit, especially if you are using an HDR for environmental lighting. The general advice is to start from a high number like 10 and work your way down to see what works for your scene. I was able to set Clamp Direct to about 4 and Clamp Indirect to about 6 and still achieve acceptable results. As for the overall “dimming” effect the Clamp will have on the scene, you can simply increase scene brightness through the compositor with a Color Balance node, or do it in post.
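Conceptually, clamping just caps each light sample at a ceiling before it is averaged into the pixel. A toy sketch of the idea (the sample values here are made up for illustration):

```python
# Hypothetical raw light samples for one pixel; the 80.0 is a "firefly".
samples = [0.9, 1.1, 0.8, 80.0, 1.0]

def clamped_average(samples, clamp):
    # Cap each sample at the clamp value, then average them.
    return sum(min(s, clamp) for s in samples) / len(samples)

print(clamped_average(samples, clamp=10))  # firefly capped at 10 -> 2.76
print(clamped_average(samples, clamp=4))   # harsher clamp, dimmer -> 1.56
```

Notice how the harsher clamp kills the firefly but also pulls the whole pixel value down, which is exactly the scene-wide dimming described above.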

4. Subdivision surface

Subdivision Surface is a modifier commonly used to create a smooth surface mesh from a blocky, angular polygon mesh. It works by splitting up the faces of the mesh into ever smaller faces in a way that gives it a smooth appearance.

It is worth checking whether you have stupidly set a Subdivision Surface of VERY MANY iterations on something incredibly trivial and tiny, but also incidentally heavily duplicated in the scene…
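Each subdivision level splits every quad face into four, so the face count grows by 4× per level – which is why a high level on a trivial, heavily duplicated object hurts so much. A quick back-of-envelope sketch:

```python
def subdivided_faces(base_faces, levels):
    # Catmull-Clark subdivision roughly quadruples the
    # (quad) face count with every level.
    return base_faces * 4 ** levels

# A default cube has 6 faces...
print(subdivided_faces(6, 2))  # -> 96 faces: harmless
print(subdivided_faces(6, 6))  # -> 24576 faces: and that's PER duplicate!
```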

5. Decimate

Did you subdivide your terrain into unwisely tiny bits and then handsculpt it with a 5px clay brush?? If there is complex modelling, or you’ve applied Subdivision Surface to a mesh and are now regretting it, you can undo your CPU-killing subdiv-happy ways by decimating the meshes that don’t need smoothing! Add the Decimate modifier to reduce the number of faces.

6. Simplify

Let’s say you’re just rendering a preview for yourself and it’s not the final render. You can quickly set a global maximum for Subdivision Surface levels and child particle numbers under Scene > Simplify. Just remember to uncheck the Simplify box when you’re producing the final render.

7. Delete unnecessary terrain

Set up Blender so that you can see several angles of your scene at the same time, along with the timeline if you need to scrub through it quickly. Go into Edit mode (Wireframe) and highlight the excess terrain that never appears in Camera view for the entire timeline, using either B (to draw a box) or C (to paint with a circular brush). Make sure you’re viewing in Wireframe mode though, because in Solid mode you’ll only be able to select the vertices that you can see, rather than all the vertices in that area regardless of whether you can see them or not.

The handiest shortcuts in the 3D view are on the numpad:
Numpad 0 is Camera view
Numpad 7 is Top view
Numpad 5 toggles between Perspective view and Orthographic view

The resultant landscape will look a bit weird, but you’ll save time by not rendering all the excess bits. Do keep the bits which you’ll need for the light to bounce off to produce a realistic scene.

8. CPU/GPU Compute and Tile Size

If you have an Nvidia graphics card, you’ll still need to enable it in Blender’s User Preferences in order to use GPU rendering, which can drastically cut down your render time. When GPU works, it’s like magic. GPU can be dramatically faster than CPU but is also limited by the total amount of VRAM on the card – once it hits that limit, the rendering process will simply fail with a memory error. I also had to dramatically rein in my expectations – I have always insisted on using desktop-replacement laptops rather than a desktop for portability (especially for my kind of work) – but one has to consider that laptop GPUs generally aren’t as powerful as their desktop counterparts in terms of VRAM, number of CUDA cores, and overall speed.

It is generally said that tile sizes should either be perfect squares or exact factors (i.e. divisors) of the final resolution (having smaller leftover bits of tiles is wasteful), but I think a lot more testing would be required to determine what suits each type of scene and each CPU/GPU. Generally, if you make the tile size too small, you incur more overhead from switching between tiles. You should experiment with the numbers and see what works for you…

What worked for my scenes (Intel Core i7-7700HQ @ 2.80GHz / Nvidia GeForce GTX 1060 6 GB GDDR5):
For CPU an ideal Tile size seems to be around 16×16 or 32×32
For GPU an ideal Tile size seems to be 312×312
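If you want tile sizes that divide your final resolution exactly (no wasteful leftover slivers), a tiny helper can list the candidates. This is just a sketch of the divisibility idea, not a Blender feature:

```python
def exact_tile_sizes(width, height, lo=8, hi=512):
    # Square tile sizes that divide BOTH dimensions with nothing left over.
    return [t for t in range(lo, hi + 1)
            if width % t == 0 and height % t == 0]

# For a 1920x1080 render:
print(exact_tile_sizes(1920, 1080))
# -> [8, 10, 12, 15, 20, 24, 30, 40, 60, 120]
```

Whether an "exact" tile size actually beats a slightly uneven but better-tuned one still comes down to testing on your own hardware.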

9. Number of AA Samples / Progressive Refine

The number of AA (anti-aliasing) samples increases render time dramatically – roughly in proportion to the sample count – and this was largely why my first renders were taking 50 MINUTES PER FRAME even on the best laptop in the entire household! How many samples are enough samples? How do you find out how many samples are good enough for you, visually?
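As a rule of thumb for path tracers like Cycles, noise falls off with the square root of the sample count – so halving the noise costs roughly four times the samples (and render time). A quick sketch of that scaling:

```python
def samples_needed(current_samples, noise_reduction_factor):
    # Monte Carlo noise ~ 1/sqrt(samples), so cutting noise by a
    # factor k needs roughly k^2 times the samples.
    return current_samples * noise_reduction_factor ** 2

print(samples_needed(128, 2))  # halve the noise -> 512 samples
print(samples_needed(128, 4))  # quarter the noise -> 2048 samples
```

This is why blindly cranking up samples is such an expensive way to fight fireflies compared to clamping.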

Under Performance, there’s an option called Progressive Refine which will progressively show you the overall image at each sampling level. It can be slower to render the entire image together this way, but you can also stop it when you think the image is good enough. It’s useful to eyeball it until you find the number of samples you are happy with, then just use that number and uncheck Progressive Refine so the render will be faster.

10. Resolution, Output File Location, and Output Quality

When you “double” the size of the image, you’re actually making it four times as large, and it will take roughly four times the CPU/GPU work to compute! When making a preview rather than the final render, you can set the resolution percentage to 50%. But don’t forget to set it back when you are doing the final render!!! (ARGH!!!)
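The pixel-count arithmetic is worth spelling out, because “double the size” quietly means “quadruple the work”:

```python
def pixel_count(width, height, scale_percent=100):
    # Blender's resolution percentage slider scales BOTH dimensions.
    s = scale_percent / 100
    return int(width * s) * int(height * s)

full = pixel_count(1920, 1080)         # 2,073,600 pixels
doubled = pixel_count(3840, 2160)      # 8,294,400 - four times the pixels
preview = pixel_count(1920, 1080, 50)  # 518,400 - a quarter of full size
print(full, doubled, preview)
```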

Make sure that you have set the correct output file location. If you are opening the blend file for the first time on a new computer, MAKE SURE YOU HAVE RESET THIS. Blender does this thing where it doesn’t warn you that the output folder/drive doesn’t exist – it will happily render and only tell you at the end of the process that it had no place to save the file.

Below that there’s also the compression/quality setting for the output. For a preview you can set it lower, but remember to set it back to 100% for the final render.

11. Selective Render / Animated Render Border

Whilst in Camera view (shortcut Numpad 0) within the 3D view, you can use the shortcut Ctrl-B to demarcate the “Selective Render” border (which appears as a red dotted line) in camera view. To release this Selective Render border, it’s Ctrl-Alt-B.

Ray Mairlot’s extremely useful Animated Render Border allows you to selective render an object moving through a scene, or to even create an animated border that can be keyframed.

When using this add-on, the final output is a frame that will still be the size of the entire render resolution, but only the selective render area will have an image and the rest will be alpha transparent.

12. Use a Render Farm

Ok so after I shaved scores of minutes off the render time, down to 2 minutes per full-resolution frame, I realised that 2 minutes x 6000 frames = 8.3 days – and that was 8.3 days that I certainly did not have! There are limits to what a great laptop with a good graphics card can do. When the computer is rendering, you can’t really do anything else that taxes the graphics card or processor, so it basically disables the computer for the duration of the render.

So… there was no other way around this – I had to use and pay for a render farm. I tried out Render Street, TurboRender and Foxrenderfarm, as they were the farms which came up when I searched for Blender render farms.

The basic process for all of them is:

– Pack external data into blend file (tick the option to automatically pack)
– Upload a zipped copy of the blend file
– Choose your render settings
– Pay the render farm some money (for slower render) or LOTS OF MONEY (for faster render)
– Monitor render progress and render quality through web interface
– Magically download completed rendered files in record time

[Do note, however, that the Animated Render Border add-on mentioned above will not work with the render farms by default – but you can write to most of these farms about desired plugins and they will let you know whether particular add-ons can or cannot be installed]

A more comprehensive review of render farms coming up in the next post…

A Visit to Geola: General Optics Laboratory – Pulsed Laser Holography


When I was in Canberra as artist-in-residence with the Australian War Memorial, I managed to see some really amazing holograms (with many thanks to the Australian War Memorial for arranging this, and to the National Gallery of Australia for allowing me to see their collections!). Thus I began hatching a crazy plan to make some holograms, which led me to travel to Vilnius to visit Geola (“General Optics Laboratory”), a company which has been producing analogue holograms as well as developing a really interesting technique of digital holography using pulsed lasers. Geola’s pioneering holography techniques have also been mentioned by a number of Australian fine art holographers, such as Paula Dawson.

(It is always worth noting that, for a moment in time, the hologram really seemed poised to be the successor to the photograph, with many fine art holography programmes developed in universities around the world, including in Australia and the UK, between the 1970s and 1990s – even the RCA used to have a holography department)

Margaret Benyon’s Totem (1979)
National Gallery of Australia – Accession No: NGA 2009.46
Materials & Technique: photographs, reflection hologram, ink, gouache, feather on paper
Dimensions: printed image 25.4 h x 20.3 w cm / Produced in Australia
Notice how the hologram actually remains a secret if the work is not lit or viewed from the right direction.

Hologram made by Andrea Wise to test conservation techniques for holographic plates
Canberra, April 2017
I think what interested me most was how Geola had interpreted the method of digital holography into holopixels. One of the senior conservators at the National Gallery of Australia, Andrea Wise (who also had a passion for understanding how holography worked), told me of a useful way of thinking about holograms: if you break a corner off a hologram, that corner itself will already contain the data of the entire image. The concept of the holopixel then makes this fragment-whole relationship clear: each holopixel is a separate element, but each holopixel contains ALL of the image data at the same time – they are optical elements which, when properly illuminated and viewed from different angles, are perceived as a specific colour dot. When we view all the colour dots as a whole, the eye and brain interpret them as an image that changes when viewed from different angles. (Viewing it with two eyes completes the illusion of our perception of the image as a three-dimensional scene.)

Although the underlying physics is well known and widely understood by scientists and students of science alike, the hologram somehow remains largely misunderstood by the average layman. Since the hologram exists as a physical photographic plate, it is sometimes mistaken for an extension of photography, although a hologram is not at all like a photograph: a photograph is an image, but a hologram is a lens. Furthermore, today the word “hologram” is used very loosely to describe so many optical illusions (eg: Pepper’s ghost, rear projection, volumetric projection, lenticular prints, virtual reality) that most people may not know what a hologram really is. When I tried to talk about my plans for the project to friends, quite often a friend might say “Oh! Holograms! I’ve seen/made some before!” only for us to discover later on that what they thought was a hologram was not actually a hologram…

Even the American electrical engineering professor Emmett Leith, the co-inventor of three-dimensional holography, described his holograms as a “grin without a Cheshire cat”. Over the years, three-dimensionality and then imagery were successively compromised, largely leaving only movement and colour behind. Technical limitations in holographic image production, as well as certain cultural and commercial conditions, have led to the overall flattening of the holographic image on both the physical and symbolic levels, resulting in the total collapse of the holographic image onto the image plane – to the point that today we mainly see the hologram in flattened embossed forms, in small particles…

Google Image Search: “Holographic”
As hologram retailers struggled to build a consumer market, they began aligning themselves with science museums and technology centres to try to capture national audiences on a mass-consumption level. Ultimately this distanced the hologram further and further from being a medium for narrative. Despite its premature demise in a commercial sense, the hologram still entered cultural consciousness as a medium designed for future mass consumption; in its general disappearance from the public eye, it transformed into a staple of science fiction films and the imagination. But it is not just in people’s imagination that the hologram has been changing. Holographic techniques have also been continuously developing! You might be surprised to know that today you can produce holograms from moving images, and that they can be in full colour!

20170720_095729

On an unexpectedly normal and ordinary street on the other side of the world, sits the rather nondescript office of the lab called Geola.

20170720_122822

It used to be that analogue holography had to be done in labs which were completely free of vibrations – so the labs involved huge concrete tables and had to be far away from civilisation and all the vibrations from cars and noise. But Geola has devised a pulsed laser system which has no such vibration problems! (Cars are running on the roads outside! You can walk into the room with the printer inside it!)

20170720_102630

This was a room that had to be seen in person.

20170720_102814

Despite its similarity to a photographic plate, a hologram is nothing at all like a photo, and there’s no way for me to adequately represent it in a photo alone.

20170720_103225

20170720_103132

This is the printer. The holopixels in this digital hologram are recorded onto photosensitive media using two pulsed laser beams: one is spatially modulated using an LCD display and focused into a 1.6mm x 1.6mm square, acting as the object beam; the other acts as the reference beam. The modulation is done such that the object beam, at the point of interference with the reference beam, contains the same information that would have come to this point from a real object (except that here the source might be film footage or 3D rendered scenes). The reference beam interferes with the modulated object beam, recording the hologram of the image on the photosensitive media.

20170720_103159

After exposure the holographic photoplate is processed using a conventional photographic process.

20170720_101815

After chemical processing, the photoplate is dried and the holographic photoemulsion is protected by laminating on a black self-adhesive film and an acrylic sheet using a standard cold lamination machine.

20170720_114723

Real 3D Scene shot from a drone

20170720_102314

Virtual 3D Scene (as evidenced by the designer who forgot to connect the trees to the ground)

See the video documentation here:

Thank you to Ramunas for showing me around Geola!

The Invisible Holography Section and a Holographic Reading List

I’ve been doing a considerable amount of reading and research on holography (and fine art holography) lately, so as I was passing through the National Library of Singapore the other day, I decided to look at what they had in the general lending section. Much to my surprise, I saw that Holography had its own category in the Dewey Decimal system!

Oh sweet well let’s go to 774 then…

Alright here are all the books from 771…

And here are all the books from 775…

HOLD ON…

THERE ARE ABSOLUTELY NO BOOKS UNDER 774?


If you do search the NLB catalogue, you will find that they have got a few books on Holography at 774 – but these are all reference books! So there are no books on holography for the masses. Oh no! There is no chance that one might be blindly wandering through the shelves, hoping to randomly soak up ideas from library books and BLAM! A HOLOGRAPHY BOOK SECTION! But no! There isn’t any Holography section, even though the word appears on the bookshelves. OH BUT HOW WILL WE DEVELOP (OR REVIVE?) THE HOLOGRAPHIC ARTS IN SINGAPORE THEN?

When you google “Dewey Decimal 774”, it says that 774 is no longer being used for Holography. However, if you look up the most updated version of DDC 23, it says 774 is still Holography. Maybe the Wikipedia page needs editing. (I’m not an expert on the DDC, to be honest)

Nevertheless, there are actually a whole lot of very excellent books about holography out there which I don’t feel are represented here. So I’ve decided to write out Debbie’s recommended reading list for anyone who wants to read more about holography:

A HOLOGRAPHIC READING LIST!

1. Johnston, Sean. Holograms: A Cultural History. Oxford: Oxford University Press, 2016.

If you’ve ever asked the questions “why do we no longer see holograms everywhere? why did the medium fail commercially? and why does it persist as a science fiction staple?” then this book will answer all your burning questions! This book explores what caused the rise, demise, and apparitional persistence of the hologram in visual culture. Its publication was preceded by Johnston’s 2006 book, Holographic Visions: A History of New Science, which is also definitely worth a read.

2. Schröter, Jens. 3D: History, Theory and Aesthetics of the Transplane Image. London: Bloomsbury Academic, 2014.

Beyond the term hologram, we can think of the 3D image as a transplane image, which Schröter’s book attempts to trace through history and theory. Schröter also co-edited another book, Das Holographische Wissen, with Stefan Rieger, but alas I do not read German.

3. Falk, David S., Dieter R. Brill, and David G. Stork. “Holography.” Seeing the Light: Optics in Nature, Photography, Color, Vision and Holography. N.Y.: Harper and Row, 1986, 368-393.

There are tons of hardcore mathematical books explaining holography (the interference pattern of coherent light is by now well known and a staple of physics classrooms), but this remains one of the classic textbooks explaining the maths and optics behind much of photography and holography (Chapter 14). Practical yet poetic, and accessible even for readers without maths or science backgrounds.

Other recommended readings:

(These are books which I’ve personally found useful in thinking about scientific approaches, technological innovations, and military technology’s influence in ways of producing images / transplane images / art)

1. Galison, Peter and Caroline A. Jones, eds. Picturing Science, Producing Art. New York: Routledge, 1998.

2. Virilio, Paul. The Vision Machine. London: British Film Institute, 1994.

3. Bishop, Ryan and John Phillips. Modernist Avant-Garde Aesthetics and Contemporary Military Technology: Technicities of Perception. Edinburgh University Press, 2010.


To be honest, the above list is simply whatever I’ve enjoyed reading recently – so if you asked me to compile another holographic reading list in a few months’ time, I expect it will have expanded… so this….. IS TO BE UPDATED IN THE FUTURE!