Bread and Butter in a Field of Dreams (Coming July 2021)

This July, I’ll be releasing a free-to-play interactive experience titled “Bread & Butter In a Field of Dreams” for Mac/Win desktop. But you could say that this project originated under a different name – “The Legend of Debbie”…

Do you want to get a reminder when
“Bread & Butter in a Field of Dreams”
is released for download,
or to hear first about
Debbie’s upcoming projects?
Join Debbie’s newsletter for all DBBD updates!


“The Legend of Debbie” was originally made as a commission for Asian Film Archive’s State of Motion in January 2021. It was my way of trying to use the archive of my own artwork as the source material for a sprawling game: exploring the different works as strange portals transporting you to weird spatialised versions of the works, and splicing my works with a partially fictionalised narrative (approximately 25% fiction, 75% reality).

The titular “legend” of the work was this directory which sorted my works into many different categories. A map legend. When I had time I was going to put more symbols all over the place, maybe have a little radar map overhead as well. I also had a lot of fun designing different rooms to represent different works.

I originally wanted to design “The Legend of Debbie” as a LIVE VR experience. Rather than release the game (which would have required far more testing on the development side than running it as a moderated tour), I would run it as a live event (workshop), where participants could come down in different timeslots to experience the VR game, facilitated by myself…

Imagine how fun it would be rolling through these odd spaces…

But then the Phase 2 Heightened Measures kicked in again, so we couldn’t have live events like this anymore. So… I did not make a VR version for “The Legend of Debbie”. And in any case, there was something that disturbed me about the final presentation of Legend.


I have come to the conclusion that there is no room for nuance. Or maybe I am not very good at nuance (it is something I am working on, but I suspect that nuance does not come easily to me mainly because my real life personality is too excitable and shouty and maybe a bit childlike and overly earnest at heart).

Instead of developing The Legend further, I somehow ended up making a completely new game from scratch. One in which very deliberately NONE of the works were shown in the game world in their original form, besides the first room which replicates the Wikicliki exhibition by the Singapore Art Museum, currently in the Ngee Ann Kongsi Concourse Gallery (Basement) of National Gallery Singapore. The show runs until 11 July 2021.

Since we couldn’t be in the gallery itself for the talk, I re-created the gallery for a talk on 29th May (a conversation between myself and curator Mustafa, whom I have worked closely with over the last few months). Instead of boring slides, based on the items that Mustafa was interested in discussing, I brought them into the gallery space as various 3D-modelled props on a table, including a few laptops handily scrolling through my actual Wikicliki and a spreadsheet of the Here the River Lies cards (many credits to George for painstakingly digitizing them).

From this totally realistic representation of a real exhibition you eventually get teleported to another world where there are lots of objects which are directly representative of the projects I’ve worked on over the last 10 years, but nothing is represented in the original form that it was made.

In the world of the Field of Dreams, every single artwork I have made in the last 10 years is turned into a transmogrified version of itself – a pop translation of the work which could comfortably exist within a commercially lucrative museum retail shop (a la MOMA shop or NAIISE or any one of those shiny design shops)… or in a dusty basement reading room within an alternative community-based establishment for which there is no lack of heart but financial viability is always a question (such as The Substation’s Random Room).

Somehow, making art is an act of translation for me. I don’t really seem to start by drawing or sketching, but by writing, and then I have to translate that into sketches, and from sketches into whatever digital medium I am working in. And this act of translation seems so arbitrary at times. Many ideas could have turned out differently had I chosen to make them in a different medium. Perhaps this expresses the tension I feel between making work as an artist and work as a designer/design educator (which earns me my living). The art can be poetic and ruminative and open-ended, whereas the design has to fulfill the brief requirements and ultimately has to be functional (and most likely measurable).

So I thought that instead of a Space Geode rendered all in white, I would have a mildly tacky Space Geode Citrus Squeezer; instead of The Library of Pulau Saigon, its various components would be turned into functional items such as a Tic-tac-toe set featuring the Chinese Spoon as the noughts and the Political Party Badge as the crosses (something with the potential to be a slightly tacky coffee table centerpiece). My pulsed laser holography work, “War Fronts”, would be rendered instead as a Jigsaw set. And instead of my print of 100 of my dreams from my Dream Syntax book, I turned it into a Scratch-off-chart of the 100 dreams. Because scratch-off maps are all the rage now on everyone’s internet shopping list, aren’t they?

Along the way I er… got a bit too excited, because who needs to write a book when you can just make the cover for the book? I was churning out dozens and dozens of PDF book cover textures to populate the DBBD SHOP.

So, perhaps we can’t quite call this work “The Legend of Debbie 2.0” anymore. Maybe this should be called by the name that seems more appropriate for it now: Bread & Butter in The Field of Dreams.

The work takes its name from a 2013 ACES study by the NAC – apparently the first survey of its kind done on arts and cultural workers, examining how on earth they make their living. I do not know which unnamed arts/cultural worker gave the survey such an evocative name, but here I have made the breads and butters literal, to be collected up before you can gain entry to the next scene.

Special mention also goes to another big survey I participated in not too long ago, which asked artists some very sobering questions about what we thought had advanced our artistic careers or had inhibited them, with the dropdown list of items that could potentially limit our careers being twice as long as the advancing list. (In an earlier iteration of the study, it suggested that we dig up our past 10 years of tax returns to examine the difference between our art income and non-art income. Me, I almost thought this was like some cruel form of “formative assessment” – “Alright, you got me, I’ve NOT been solely living off my earnings as an artist, and in fact, at times this whole “art” thing is frequently a complete loss leader operation!”) I have many ambivalent feels about this. On one hand, my desire to make art isn’t about the money, but on the other hand I also do want to fix the current state of affairs…

There’s a maze and some other weird shizz coming up…

The world is still very much a work-in-progress and I look forward to fleshing it out for July’s “workshop” and to releasing it as a free game for download! My goal is a release by July 2021! Methinks I might even do it as a gameplay video – I quite enjoyed this live stream (ticketed as a workshop, but really more like a Twitch stream, with me having set up OBS and all the ridiculous animated overlays and chats).

I also did another detailed breakdown of the time I spent on this last week using RescueTime. RescueTime tracks the time I spend in each app, and it is handy in that it splits my time into working hours (defined as 9am-6pm) and non-working hours (6pm-9am), so I can sift out the time I spend on personal projects versus time on my day job. My secret to eking out the time is usually to work for 1-2 hrs after Beano sleeps at night and wake at about 4-5am to work.
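As a toy sketch of how that working/non-working split works (the window and the app log below are made-up illustrations, not RescueTime’s actual data or API):

```python
from datetime import datetime

# Toy illustration of the working/non-working hours split described above.
# RescueTime does this internally; the window and log here are made up.
WORK_START, WORK_END = 9, 18  # 9am-6pm

def is_working_hours(ts: datetime) -> bool:
    """True if the timestamp falls inside the 9am-6pm day-job window."""
    return WORK_START <= ts.hour < WORK_END

log = [
    (datetime(2021, 6, 1, 4, 30), "Blender", 90),   # early-morning personal work
    (datetime(2021, 6, 1, 10, 0), "Unity", 120),    # day-job hours
    (datetime(2021, 6, 1, 21, 15), "Blender", 60),  # after Beano's bedtime
]

work = sum(mins for ts, app, mins in log if is_working_hours(ts))
personal = sum(mins for ts, app, mins in log if not is_working_hours(ts))
print(work, personal)  # → 120 150
```

Everything outside the 9am-6pm window gets counted towards personal projects, which is how the pre-dawn and post-bedtime hours show up separately from the day job.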

It goes to show that despite working full time and having a time-consuming baby bean (with the help of dingparents dutifully caring for her whilst I work), it is still possible to eke out the time to maintain an active artistic practice if one has the will to do so (and the discipline to wake up early).

It does feel like a culmination of 3D skills I have taken years to acquire:
2014: when I realised how non-existent my 3D design skills were
2016: when I made myself try to make one blender render a day
2017: intentionally producing new works using 3D
2019: intentionally producing new works in Unity (very basic at that stage)
2020: taking the Unity Developer cert at my workplace, supervising more Unity-based projects
2021: being able to build things like this in a week (on top of a separate full-time job)

I’ve seen several GDC talks and devlog videos on YouTube detailing how every successful game dev probably has dozens of “failed games” before they finally make the one game they are happy with – that one breakthrough game. Likewise, I don’t expect Field of Dreams to be perfect on its July 2021 release, but I hope to collect lots and lots of feedback after releasing it so I can improve the experience!



Space Geodes at Ota Fine Arts Singapore (4 August 2018 – 15 September 2018)


The show at Ota Fine Arts is all set up, with many thanks to Jodi for inviting me to show the work. I’ve shown this work twice before, but this is the first time I’ve had the option of REAL PLINTHS. I previously used acrylic casings as plinths. At the time it was a practical decision, as I was using whatever unwanted ‘plinth’-like items I could find and The Substation was getting rid of these old casings – but it was also a material consistent with the rest of the work. Plastic upon more plastic!

[PS you can read more of my writing about the work here as well:]
Space Geodes: On the 3D Printed prototype as Digital Fossil
Space Geodes at Singaplural 2016
Public Service Notice about Geodes

Left: Space Geodes at Singaplural 2016. Right: Space Geodes at Objectifs in 2017.

Space Geodes at Ota Fine Arts in 2018
Given free choice over the colour of the plinth, I’d always choose Grey as a neutral base over White or Black. We chuckled over the names given to the colours, and I have to admit I was almost tempted to choose a colour simply because it was named “GRANITE ROCK” or “SLATE GRAY”. (Ultimately, if the names given to the colours by savvy paint companies were totally ignored, the choice would have been very clear to me anyway; it was always going to be a specific warm mid-range sort of grey for which I don’t have a name but can always pick out of a lineup.)


I did give the arrangement more thought this time around. Recently I’ve been enjoying laser cutting a lot, because I now have access to a lasercutter in the NYP Makerspace, which is literally a 5-minute walk from where I am staying at the moment (and it’s under-utilised!), so as a simple experiment I tried to make an acrylic base/riser which would also light the work from beneath.


Geode with base

The only reason I haven’t gone with this lighting option is that there is a little colour discrepancy in the “white” when it is lit. My lights and acrylics are too “laser white”, whereas the work glows with a warm white. Weirdly enough, some of the works looked more yellow when lit, as if they differed in thickness – which I couldn’t understand, since they were designed as hollow shells of the same thickness for the SLS process (to save on material cost).


The answer as to why there was a discrepancy in thickness and lighting became clearer later. As I was arranging the works yesterday, POWDER STARTED COMING OUT OF SOME OF THE WORKS. The powder had been packed more thickly in some portions, which was why the lighting was not consistent. Having shown the works twice before, I was surprised that powder was draining out now, when I’d have expected any excess powder to have come out in the previous rounds. Perhaps it was all the transportation and vigorous moving about that dislodged the excess powder hiding inside the print, for the white nylon powder began issuing forth from the escape holes I had designed for the works.

Perhaps on previous viewings we had treated the works so very softly and cautiously as if we were handling live explosives – but this time around I put them in a basket for rocks and slung them over my shoulder as I carried them to the gallery.


For those unfamiliar with the Selective Laser Sintering (SLS) process: it is an additive manufacturing process in which a laser sinters powder into a solid material. Because the material itself is quite costly, designers often make the part hollow, with some escape holes so the excess powder can be shaken out. I would have thought that all the powder from before had been shaken out by now!
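A rough back-of-envelope for why hollowing out an SLS part saves so much material (the radius and wall thickness here are hypothetical, not the geodes’ actual dimensions):

```python
from math import pi

def sphere_volume(r: float) -> float:
    """Volume of a sphere of radius r."""
    return 4 / 3 * pi * r ** 3

# Hypothetical example: a 50mm-radius form printed as a 2mm-thick shell
solid = sphere_volume(50)
shell = solid - sphere_volume(48)  # subtract the hollow interior
print(round(shell / solid * 100))  # → 12 (% of the solid part's material)
```

A thin shell sinters only a small fraction of the powder a solid part would, which is exactly why the escape holes (and the stray powder hiding inside) exist.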

It’s a bit funny; come to think of it, the white powder flowing out visually resembles a weathering process in which rocks break down into smaller particles. Earlier in the day I was also building a prototype for a new work in which one can see material flowing in a similar way. When something breaks down into particles that small, the dust is literally blown into the wind. There’s no “trying to collect it in a cup and sticking it back together”. It’s just gone, blown away; it ceases to be an identifiable part of the thing it was once part of.

Prototype for a new work
The private view for the group show is tonight – please come down to see it if you’re in town!


The exhibition will be on view from August 4 through September 15, 2018 at Ota Fine Arts, 7 Lock Road, #02-13 Gillman Barracks, Singapore 108935.

Kray Chen | Sheryl Chua | Debbie Ding | Hilmi Johandi | Tristan Lim | Ian Tee
4 August – 15 September 2018

Opening Reception in the presence of the artists:
Friday, 3 Aug 2018, 6.30 – 8.30 pm

Ota Fine Arts Singapore is delighted to present SPACES, a group exhibition featuring 6 artists from Singapore: Kray Chen, Sheryl Chua, Debbie Ding, Hilmi Johandi, Tristan Lim and Ian Tee. This exhibition showcases each artist’s reaction to the spaces and structures in contemporary society, as well as a more formal focus on pictorial space. From painting to photography, video, 3D print and textile work, diverse expressions by the artists discuss relations between the virtual/imaginary and actual spaces.

Lessons in 3D Printing: Raft Vs Brim Vs Skirt



Designing and teaching a 3D module has had the unintended effect of giving me the role of “de facto 3D printing technician” and “general all-round fixer of 3D models and print settings”. Ah! I am a fool, for I had already known of the fickle nature of 3D printers, their unwillingness to behave when ordered to. The machine is also a wilfully obtuse device that will do exactly what the designer asked it to do, even if the designer has made an awful mistake.

Here is a list of lessons I have learnt after facilitating several hundred hours of 3D Printing in a mini workshop. Printers used were Raise3D N2.

1. Raft Vs Brim Vs Skirt? JUST USE A RAFT

What is the difference between using a Raft, Brim and Skirt?


The Raft is an additional piece below the entire print itself.


The Brim is an extension of the first layer which expands the amount of contact the first layer has with the print bed. The print’s first layer is already touching the bed itself, unlike the raft which is an additional few layers below the print itself.


The Skirt is just an additional line printed around the print itself.

Often one may think that the raft is a waste of material, but I have found that rafts are absolutely essential, since they greatly affect first-layer adhesion – and with this kind of FDM printer, first-layer adhesion is one of the main factors that will literally make or break your entire print. As the raft is much bigger than the print’s footprint (and also bigger and thicker than a brim), it ensures better adhesion to the print bed.

Also, if you end up having difficulty removing the print, you can afford to damage the raft somewhat during removal – without damaging the actual print.


Example of a failed print: this generally flat print would not complete when I printed it with just a skirt; it would only print when I used a raft. For a large print, this kind of error can turn the print into a huge blob that gets pushed into the axis belt – breaking it and causing a total breakdown of the machine. So you can imagine why adhesion problems shouldn’t be taken lightly if you want to keep on printing!

2. Infill Density
10% will do for non-load-bearing parts. Don’t bother printing more infill unless the part actually needs the strength.

3. Always Check Slicing Preview
Check the Slicing Preview for EVERY PRINT before printing. More often than not it will provide clues as to whether a print will actually complete. A common issue is that part of the work is not touching the print bed. If there is a gap in a layer, or no raft under any part, or any part which defies gravity and the known laws of physics, then do not proceed, for THIS WILL NOT PRINT WELL. Redesign the part before printing.

4. Tricks for very large overnight prints
Lay item as flat as possible on the print bed
Ensure that you have observed the raft completing successfully before leaving the print to run overnight.
If you don’t want to do a raft because a raft would exceed the print bed size, set the first layer to be extra thin.

5. Only Cura can slice right-extruder-only print for dual extruder printer
A crazy question you may find yourself asking if you are using a dual extruder is: “should I try to use just the right extruder to print?” Well, it’s not really advisable, and I can’t seem to find any slicing app other than Cura which allows for such a setting. Alternatively, you can dive into the gcode directly and change all instances of T0 to T1. However, the problem remains that right-extruder-only prints may experience nozzle strike:
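The gcode edit described above can be scripted in a few lines – a minimal sketch, assuming a file where every standalone T0 token is a tool-select you actually want remapped:

```python
import re

def remap_extruder(gcode: str) -> str:
    """Swap every standalone T0 tool token to T1.
    (Whether T1 is the right or left extruder depends on your firmware --
    check your own machine's convention before printing.)"""
    # \b word boundaries avoid mangling longer tokens that merely start with T0
    return re.sub(r"\bT0\b", "T1", gcode)

sample = "T0\nM104 S210 T0\nG1 X10 Y10 E5\n"
print(remap_extruder(sample))  # every T0 becomes T1
```

Run it over the whole exported .gcode file and re-check the result in a gcode previewer before sending it to the printer.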

Left: Nozzle Strike in progress. Right: Print which I stopped because the left extruder was clearly impacting on the print being produced with the right extruder only.

Blender Cycles: 12 Ways to Reduce Render Time

Recently my challenge was to produce 2000 x 3 frames for the production of three holograms. At 50 minutes per frame, 6000 frames would have taken a frightful 208 days to render out on one already relatively speedy laptop. On the slowest mechanised computing potato in the household (a 5+ year old MacBook Pro), it would have taken several hours per frame. But the real issue was not the complexity of the scene, but the way in which I had initially constructed the scene, and the render settings I had chosen.
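The arithmetic behind that estimate:

```python
# Back-of-envelope render budget for the hologram frames described above.
frames = 2000 * 3        # three holograms x 2000 frames each
minutes_per_frame = 50
days = frames * minutes_per_frame / 60 / 24
print(round(days, 1))    # → 208.3 days on a single machine
```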

After several weeks of editing (and frantically googling “HOW TO REDUCE RENDER TIME”), here is a compilation of the ways I used to reduce my render time (CPU) for a single frame from a horrible 50 minutes to 1-2 minutes. It’s not an exhaustive list, but I thought it would be useful to WRITE IT ALL DOWN NOW in case Future Debbie completely forgets everything after the hard slog of trial and error of the last few weeks…

1. Duplicate Linked Object Instances the right way

This may seem pretty obvious but I think it needs to be on the top of every list. The right shortcut to duplicate an instance of an object is ALT-D not SHIFT-D. SHIFT-D produces a completely new object with no reference back to the data of the original object. A linked object shares the same object data, so any edits you make to the first object will make all the other linked objects change as well. When you’re in a hurry it is easy to accidentally type Shift-D instead of Alt-D, but this has the potential to make a serious impact on render time.

2. Link Object Data

Let’s say you completely can’t recall whether you used Shift-D or Alt-D to duplicate your objects. If you go to the Mesh data panel you’ll be able to see how many objects are currently linked to the same data. If your mesh is unique and unlinked, select all the objects you want to relink, then shift-select the master last (so it is the active object), and press Ctrl-L whilst the mouse is over the 3D view. You’ll get the “Make Links” dropdown menu, where you should select Object Data to link the objects. Other links for materials, groups, etc. can also be made with this shortcut.

If for some reason you do accidentally select objects which aren’t at all similar, note that all the latter objects will be changed to take on the object data of the master object anyway…

In general, I personally found it useful to assign my materials really crazy colours in the viewport so that I could see at a glance which objects were the same and which were not.

3. Clamp Direct and Indirect

Usually you end up turning up the samples for a scene because there are too many ‘fireflies’ and noise, but clamping values can quickly remove these stray white dots which appear on the render. Clamping sets the very lightest (brightest) samples to a maximum value so it removes those stray white fireflies, but in the process, it will reduce the bright “pop” or light sheen that you might have wanted to achieve with certain glossy materials.

If you set the Clamp too low, it will also cause the ENTIRE scene to be too dark/dimly lit, especially if you are using an HDR for environmental lighting. The general advice is to start from a high number like 10 and work your way down to see what works for your scene. I was able to set Clamp Direct to about 4 and Clamp Indirect to about 6 and still achieve acceptable results. As for the overall “dimming” effect the Clamp will have on the scene, you can simply increase scene brightness through the compositor with a Color Balance node, or do it in post.

4. Subdivision surface

Subdivision surface is a modifier commonly used to create a smooth surface mesh from a blocky linear polygon mesh. It is done by splitting up the faces of the mesh into even smaller faces in a way that gives it a smooth appearance.

It is worth checking if you have stupidly set a subdivision surface of VERY MANY iterations for something incredibly trivial, tiny but also incidentally heavily duplicated in the scene…

5. Decimate

Did you subdivide your terrain into unwisely tiny bits and then hand-sculpt it with a 5px clay brush?? If there is complex modelling, or you’ve applied Subdivision Surface to a mesh and are now regretting it, you can undo your CPU-killing subdiv-happy ways by decimating the meshes that you don’t need smoothing on! Add the Decimate modifier to reduce the number of faces.

6. Simplify

Let’s say you’re just rendering a preview for yourself and it’s not the final render. You can quickly set the global maximum Subdivision Surface and Child Particle numbers under Scene > Simplify. Just remember to uncheck the Simplify box when you’re producing the final render.

7. Delete unnecessary terrain

Set up Blender so that you can see several angles of your scene at the same time, along with the timeline if you need to scrub through it quickly. Go into Edit mode (Wireframe) and highlight the excess terrain that never appears in Camera view for the entire timeline, using either B (to draw a box) or C (to paint with a circular brush). Make sure you’re viewing in Wireframe mode, though, because in Solid mode you’ll only be able to select the vertices that you can see, rather than all the vertices in that area regardless of visibility.

The handiest numpad shortcuts in the 3D view are:
0 for Camera view
7 for Top view
5 to toggle between Perspective and Orthographic view

The resultant landscape will look a bit weird like this but you’ll save time not rendering all the bits. But do keep the bits which you’ll need for the light bouncing off to produce a realistic scene.

8. CPU/GPU Compute and Tile Size

If you have an Nvidia graphics card, you’ll still need to enable it in Blender’s User Preferences in order to use the GPU, which can drastically cut down your render time. When GPU rendering works, it’s like magic. GPU can be dramatically faster than CPU but is also limited by the total amount of VRAM on the card – once it hits that limit, the rendering process will simply fail (memory error). I also had to dramatically rein in my expectations – I have always insisted on desktop-replacement laptops rather than a desktop for portability (especially for my kind of work) – but one has to consider that laptop GPUs generally aren’t as powerful as desktop GPUs in terms of VRAM, number of CUDA cores, and overall speed.

It is generally said that the tile size should be a perfect square, or a factor (i.e. an exact divisor) of the final resolution, since having small leftover slivers of tiles is wasteful – but I think a lot more testing would be required to determine the best size for a given type of scene and CPU/GPU. Generally, if you make the tile size too small, you incur more overhead from switching between tiles. You should experiment with the numbers and see what works for you…
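A small helper for shortlisting tile sizes that divide a resolution exactly (just an illustrative sketch – the best size still has to be found by timing real renders on your own hardware):

```python
def even_tiles(resolution: int, candidates=(16, 32, 64, 128, 256, 512)):
    """Return the candidate tile sizes that divide the resolution exactly,
    leaving no partial sliver tiles at the edges."""
    return [t for t in candidates if resolution % t == 0]

print(even_tiles(1920))  # → [16, 32, 64, 128]
```

For a 1920px-wide frame, anything up to 128 tiles evenly; 256 and 512 would leave leftover slivers.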

What worked for my scenes (Intel Core i7-7700HQ @ 2.80GHz / Nvidia GeForce GTX 1060 6 GB GDDR5):
For CPU an ideal Tile size seems to be around 16×16 or 32×32
For GPU an ideal Tile size seems to be 312×312

9. Number of AA Samples / Progressive Refine

The number of AA (Anti-Aliasing) samples drives render time more than almost any other setting – render time scales roughly linearly with sample count – and this was largely why my first renders were taking 50 MINUTES PER FRAME even on the best laptop in the entire household! How many samples are enough samples? How do you find out how many samples are good enough for you, visually?

Under Performance, there’s an option for Progressive Refine, which will progressively show you the whole image at each sampling level. It can be slower, since the entire image has to refine together, but you can also stop it when you think the image is good enough. It’s useful to eyeball it until you find the number of samples you are happy with, then use that number and uncheck Progressive Refine so it will be faster.

10. Resolution, Output File Location, and Output Quality

When you “double” the size of the image, you’re actually making it four times as large, and your image will take 4 times the CPU/GPU time to compute! When making a preview rather than the final render, you can set the resolution to 50%. But don’t forget to set it back when you are doing the final render!!! (ARGH!!!)
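To spell out the pixel arithmetic:

```python
w, h = 1920, 1080              # example full-resolution frame
full = w * h
doubled = (2 * w) * (2 * h)    # "doubling" the image size
preview = (w // 2) * (h // 2)  # the 50% preview setting

print(doubled // full)   # → 4: doubling each dimension quadruples the pixels
print(full // preview)   # → 4: a 50% preview renders a quarter of the pixels
```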

Make sure that you have set the correct Output file location. If you are opening the blend file for the first time on a new computer, MAKE SURE YOU HAVE RESET THIS. Blender does this thing where it doesn’t tell you that the output folder/drive doesn’t exist – it will happily render, and only tell you at the end of the process that it had no place to save the file.

Below that there’s also the compression/quality setting for it. For a preview you can set it lower but remember to set it back to 100% for the final render.

11. Selective Render / Animated Render Border

Whilst in Camera view (shortcut Num-0) within the 3D view, the shortcut CTRL-B lets you demarcate the “Selective Render” border (which appears as a red dotted line) in camera view. To release this Selective Render border, it’s CTRL-ALT-B.

Ray Mairlot’s extremely useful Animated Render Border add-on allows you to selectively render an object moving through a scene, or even to create an animated border that can be keyframed.

When using this add-on, the final output is a frame that will still be the size of the entire render resolution, but only the selective render area will have an image and the rest will be alpha transparent.

12. Use a Render Farm

Ok, so after I shaved scores of minutes off the render time, down to 2 minutes per full-resolution frame, I realised that 2 minutes x 6000 frames = 8.3 days – and that was 8.3 days that I certainly did not have! There are limits to what a great laptop with a good graphics card can do. When the computer is rendering, you can’t really use it for anything else that taxes the graphics card or processor, so it basically disables the computer for the duration of the render.

So… there was no other way around this – I had to use and pay for a render farm. I tried out Render Street, TurboRender and Foxrenderfarm, as they were the farms which came up when searching for Blender render farms.

The basic process for all of them is:

– Pack external data into blend file (tick the option to automatically pack)
– Upload a zipped copy of the blend file
– Choose your render settings
– Pay the render farm some money (for slower render) or LOTS OF MONEY (for faster render)
– Monitor render progress and render quality through web interface
– Magically download completed rendered files in record time

[Do however note that the Animated Render Border add-on mentioned above in this list will not work with the render farms, but you can write to most of these render farms regarding desired plugins and they will let you know if the add-ons and plugins can or cannot be installed]

A more comprehensive review of render farms coming up in the next post…

365 Days of 3D – Blender

At the start of 2016, I decided I should brush up on my 3D skills by doing a 3D render a day for practice purposes (365 DAYS OF 3D!). All the 3D things I’d made in the past were mainly designed in OpenSCAD and then 3D printed/photographed/digitally painted by hand in Photoshop and Illustrator. I’ve used everything from OpenSCAD, SketchUp, Blender, Modo, SolidWorks and Meshmixer to produce and edit 3D models – but oddly without ever gaining the proper finesse or experience of knowing how to light and render scenes properly (since I was producing models for the purpose of 3D fabrication/printing, rather than for 3D rendering). For the last few years, in a weird roundabout way, I found other ways to achieve an artificial/plastic 3D-rendered look without doing actual 3D rendering. So the next step seems to be: what about becoming more skilled at 3D lighting/rendering so I can maybe find new ways to experiment with realism?

I decided to focus on free tools like Blender and Meshmixer to see what would come of it (I have since also gotten into Unity). But real life and work intervened, so I didn’t quite finish a model a day. However, a roundup of my Blender experiment for the first 50 or so days seems in order here…

See also:
– the best of my generic renders
– 3653d wiki: annotated notes on my Blender process and shortcuts


It took me a week of daily use to properly internalise the foundations of Blender, after which it became a joy!… or to be very frank, almost an addiction!?? A kind of twisted, warped game where I had to make replicas of everything I saw in reality; I’d take notes on where the objects and light sources were located and how the light was bouncing off things; I walked down the streets at night and unrealistically declared to George “LOOK AT HOW MANY POINT LIGHTS THERE ARE IN THIS SCENE!”; and railed against how reality and the building standards in real life are pretty shitty – in the sense that things don’t snap onto a grid and the average space/house has so many messy joints. (Realism in 3D modelling/rendering is composed of carefully calculated mistakes…)

For the first few days of working intensively with Blender, I couldn’t understand why nothing happened when I left-clicked, and all the shortcuts I hoped to find seemed mapped to completely different things than I expected, which made it incredibly frustrating to use at first. The options are Mode-dependent, meaning you must understand exactly which Mode the function you are looking for lives under (e.g. you can edit and see a modifier such as Boolean or Subdivision Surface in Edit Mode, but you cannot Apply it unless you go back into Object Mode; or if you are viewing in Perspective instead of Ortho, you cannot see the background reference image). You’ll also need a mouse for Blender to make life worth living. But once you have finally internalised all the shortcuts, Blender works like a charm and Cycles Render is surprisingly decent.

One of the best things about Blender as an open source tool is that it used to have a Screencast Keys plugin which displayed which key you had pressed, making it ideal for recording screencasts and teaching other people how to use it. Every complex application should have a function like this – it makes learning and teaching the application a lot easier! Everything I learnt about Blender came from watching people’s videos at 1.5x speed and looking at the little letters appearing in the corner of the screencast.

Sadly, Screencast Keys was removed in 2.7, but I found a way to activate it again – and then I made my first Blender screencast!! (Complete with blandly aspirational royalty-free background music!)

There’s a lot of messy hidden geometry in this one, but I don’t care since we’re not going to send a hairy bear to a 3D printer. My main goal here was to capture the little finger indentations which are often left in the short velveteen fur of the bear.

George (as a 3D layman) said that the visuals coming from Blender looked impressive to him considering the time taken to make them, but to be honest I haven’t become more impressed by Blender after using it – conversely, I have become less impressed by the 3D modelling I see everywhere else, after realising that clean geometric designs are sometimes face-slappingly simple to replicate. It wasn’t that the technology had become more awesome; it was simply that I needed to get to the point of realising how easy it would have been to access the technology all along if I just put my mind to it. I don’t know – why do I always have this idea that if something is “too easy” to learn, then it is not really true technical expertise? Or am I just being pedantic, or have I been insidiously conditioned into thinking that technical skill must be beyond me?

As an exercise, I tried to find images to replicate in 3D, to test my ability to reproduce a model (together with camera position and lighting) within a short time. I decided the reference images also had to be photos. Since these were meant to be quick exercises, I chose simple images that I could render within a reasonable amount of time. But the point was to have some fixed goal in mind – to see whether or not I could achieve a specific look, rather than settling for whatever effect emerged when I actually wanted a particular one.

Here are the outcomes of a few of the other experiments:




My Blender Render


A delightfully simple lighting and metal-shading exercise based on the set design for the new PC Music track featuring Chinese singer Chris Lee (李宇春) (track produced by A. G. Cook)


The reference ended up coming from some Hyperallergic “Concise guide to the 2015 Miami Art Fair”, an event I have never cared for or desired to attend, past or future. I did not know why I had ended up on this page after reading something art or design related – but it doesn’t even matter anyway. Nothing matters anyway.

The image was titled “Shannon Ebner at ICA Miami (via”. I have not googled it but I’ll assume it was the artist whose typographic work is being shown in that black room.


My Blender Render
Woo and yay, I’ve replicated this crushingly boring typographic exhibition space in Blender.


I saw this clean image after having just spent some time reading about index of refraction (IOR), which describes how much transparent materials such as glass and acrylic/plastic bend light. A good list can be found at IOR Values; plastic tends to be about 1.46 to 1.5. So I thought I’d see how long it would take to reproduce this image.
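Incidentally, the bending itself follows Snell’s law (n1·sin θ1 = n2·sin θ2). Here’s a quick Python sketch using the ~1.49 acrylic value from that list (the function name is mine, just for illustration):

```python
import math

def refraction_angle(theta_in_deg, n1=1.0, n2=1.49):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Angle of the refracted ray (in degrees) for light passing from a
    medium with IOR n1 (air ~1.0) into one with IOR n2 (acrylic ~1.49)."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

# A ray hitting acrylic at 45 degrees bends toward the normal, to ~28.3 deg
print(round(refraction_angle(45.0), 1))
```

The higher the IOR, the more the ray bends toward the surface normal – which is why glass (~1.5) and water (~1.33) look so different in renders.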

I found a further closeup of it via reverse Google Image search, and traced it back to Catherine Lossing as the photographer.


My Half Hour Blender Render

And yes, it turns out it took just half an hour to make this. From scratch. Almost depressing, isn’t it? It almost feels like taking the piss. I could easily improve the reflection/refraction (which you can see in the original photo) if I spent more time on lighting, but a rough proof-of-concept is sufficient for this exercise.


Reference taken from a random image I saw on Google Images whilst searching for something else entirely. I decided to just focus on the space and lighting, so I didn’t really care for reproducing the artwork beyond simple monogrammatic blocks stuck to the wall.

To be honest, I didn’t even know Theaster Gates was the name of the artist when I did this. Instead, whilst making it, I thought to myself, “hmm, why does this weird ‘Theaster Gates’ gallery look like the White Cube in Bermondsey?…” After which I googled it, and yes, it is White Cube… What can I say – I don’t follow the London art ting so closely, and I was not in London in 2011 when the gallery first opened and this exhibition was on. Sorry if it sounds ignorant (but actually, I’m not even really sorry, so…)


My Blender Render


In next week’s update on 365 Days of 3D: Meshmixer and Non-manifold Nightmares…

OpenSCAD: transformations, minkowski sum, and lithophanes

This week I’ve been teaching myself how to use OpenSCAD, a free software tool which allows you to create 3D CAD models through programming. The syntax is really simple, logical, and surprisingly quick to experiment with – although, as you might imagine, there is an aesthetic trade-off for producing something so quick to model, render, and print: by default the simple primitives are low-poly, because this will suffice for most people producing functional household fixes. I find it interesting that, by default, most objects printed from OpenSCAD will automatically have that ‘computer-aided design’ look.

To understand what I mean, here is an example of the same sphere with different overrides. There are special variables which control the number of fragments used to generate all the arcs, and which can be tweaked to improve the quality:

$fa – minimum angle for a fragment (min value is 0.01)
$fs – minimum size of a fragment (min value is 0.01)
$fn – number of fragments (overrides the other two when set)
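Per my reading of the OpenSCAD manual, these variables combine into a per-arc fragment count. A Python sketch of the documented formula (defaults $fa=12 and $fs=2; treat this as my approximation of the docs):

```python
import math

def fragments(r, fa=12.0, fs=2.0, fn=0):
    """Approximate number of fragments OpenSCAD uses for an arc of
    radius r, following the formula in the OpenSCAD manual
    (defaults $fa=12, $fs=2; fn=0 means '$fn not set')."""
    if fn > 0:
        return max(int(fn), 3)  # $fn overrides the other two
    # fragment count is angle-limited or size-limited, never below 5
    return int(math.ceil(max(min(360.0 / fa, r * 2 * math.pi / fs), 5)))

print(fragments(10))         # default quality for a radius-10 arc
print(fragments(10, fn=64))  # $fn wins when set
```

Note how small radii get clamped to just 5 fragments by default – which is exactly why tiny default spheres come out looking so low-poly.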


Two other functions for rounding things off are hull() (which effectively gloops things together) and minkowski() (which takes the Minkowski sum of two point sets).

Hull produces a convex hull of the objects you put together. Think of it as filling all the concave valleys between objects with a big flat hard-edged squeegee and polyfill.

The Minkowski sum is touted as an ‘elegant’ way of producing rounded corners – you overlap objects such as a cylinder and a cube – but it’s more complex than just that. As I understand it, the Minkowski sum can be used to produce the solid sweep of an object in motion. It can also be used to calculate the possible positions of an object moving within a space: in motion planning, if an object needs to find its way around obstacles, the region the object cannot enter is the Minkowski sum of the obstacles and the object itself reflected through its origin (i.e. rotated 180 deg).
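To make the ‘sum of two point sets’ idea concrete, here’s a toy Python sketch on 2D points – a crude, sampled stand-in for what minkowski() does on continuous solids:

```python
def minkowski_sum(A, B):
    """Minkowski sum of two 2D point sets: every point of A
    offset by every point of B."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# Sweep a unit square's corners by a tiny 4-point 'disc': the result
# bulges outward on every side, which is why overlapping a cube with
# a cylinder or sphere rounds off the cube's edges.
square = {(0, 0), (1, 0), (0, 1), (1, 1)}
disc = {(0.5, 0), (-0.5, 0), (0, 0.5), (0, -0.5)}
swept = minkowski_sum(square, disc)
```

With real solids the sets are infinite, which is why the CGAL-backed minkowski() in OpenSCAD gets expensive so quickly.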

Screen Shot 2014-12-01 at 11.22.43 am Screen Shot 2014-12-01 at 11.22.54 am Screen Shot 2014-12-01 at 11.24.15 am

It is indeed a funny, fast way of getting a rounded edge, but also potentially CPU-intensive – I discovered I needed to drop the number of fragments in order to have it process at a reasonable speed. If you use a sphere like in the 3rd example here and change $fn to 30, it may take over 2 hours to compile. So for prototyping, you would obviously want to lower the number of fragments so you can compile faster along the way.

Basic transformations:
resize([x, y, z]) { … }
rotate([x, y, z]) { … }
translate([x, y, z]) { … }
mirror([x, y, z]) { … }
minkowski() { … }
hull() { … }
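As in OpenSCAD, the outermost transformation applies last, so translate(…) rotate(…) cube() rotates first and then shifts. A toy Python sketch of that composition on a single point (function names are mine, not OpenSCAD API):

```python
import math

def rotate_z(p, deg):
    """Rotate point p = (x, y, z) about the Z axis by deg degrees,
    like OpenSCAD's rotate([0, 0, deg])."""
    t = math.radians(deg)
    x, y, z = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

def translate(p, v):
    """Shift point p by vector v, like OpenSCAD's translate(v)."""
    return tuple(a + b for a, b in zip(p, v))

# translate([5, 0, 0]) rotate([0, 0, 90]) applied to (1, 0, 0):
# rotate first -> (0, 1, 0), then shift -> (5, 1, 0)
p = translate(rotate_z((1, 0, 0), 90), (5, 0, 0))
```

Swapping the order (rotate the translated point instead) lands you somewhere entirely different, which is the usual source of “why is my part orbiting the origin?” confusion.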


Using OpenSCAD to model functional household mods

OpenSCAD is perfect for producing simple household fixes in CAD. Here is an example of a Vileda mop head clothes pole adaptor which I made to hoist up clothing to a curtain rail. I used a vernier caliper to measure the pole and produced a few iterations to find the precise fit/size; the final print is meant to “snap” into place into the Vileda pole.

Screen Shot 2014-12-01 at 10.02.10 am


(1) I realised I shouldn’t have printed it on a raft – it doesn’t really need any support/brim/raft; (2) I made the prongs too small; (3) the prongs are the right size now, but in an attempt to make it fit better I reduced the bottom by 0.5mm, and that was too much…


On Thingiverse: Vileda Mop Adaptor – Clothes pole
Functional household mod – ACHIEVEMENT UNLOCKED?



I wanted to understand what might be a good way to produce a lithophane in OpenSCAD. There is a function, surface, which can use a heightmap (an image converted to dat format). Not surprisingly, this is already well-trodden territory, but I didn’t want to just use the Thingiverse Customizer to produce it without understanding it first, so here is my understanding of how it can be produced in OpenSCAD (following the method used in iamwil’s embossanova library):

Screen Shot 2014-11-30 at 6.19.47 pm

  • Started with a fetching image of George which I made into a PNG
  • Install Xcode
  • Install Homebrew
  • Install ImageMagick (in terminal: brew install imagemagick)
  • Use ImageMagick to convert your image to raw grayscale (eg, in terminal: convert george_gray.png -type Grayscale -negate -depth 8 gray:george_gray.raw)
  • Use Ruby to convert your raw file into a dat file (I used iamwil’s raw2dat example)


# raw2dat.rb, from iamwil’s embossanova library
width = 300 # => width of resized raw image
pixels = File.binread('george_gray.raw').unpack('C*')
File.open('george_gray.dat', 'w') do |f|
  pixels.each_with_index do |pixel, idx|
    f.write(pixel.to_s)
    f.write(((idx + 1) % width == 0) ? "\n" : ' ')
  end
end

Screen Shot 2014-12-01 at 9.09.25 am

Every pixel in the image is then turned into a number representing that pixel’s grayscale value. So for an image of 100×100, there will be 100 numbers in each row, and 100 rows. It therefore forms a sort of terrain or heightmap that we can use in OpenSCAD.
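To sanity-check the format, here’s a small Python sketch (filename is just an example) that writes a tiny gradient heightmap in the same space-separated layout that surface() expects:

```python
def write_dat(path, rows):
    """Write a 2D list of heights as the whitespace-separated grid
    OpenSCAD's surface() reads: one row of numbers per line."""
    with open(path, 'w') as f:
        for row in rows:
            f.write(' '.join(str(v) for v in row) + '\n')

# A 4x4 ramp: heights rise from 0 to 3, left to right
ramp = [[x for x in range(4)] for _ in range(4)]
write_dat('ramp.dat', ramp)
```

Rendering it with surface(file = "ramp.dat") should produce a simple sloped wedge.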

scale([1, 1, 1/100]) 
surface(file = "george_gray.dat", center = true, invert = true, convexity = 5);

Screen Shot 2014-11-30 at 6.19.55 pm

Screen Shot 2014-11-30 at 6.20.12 pm

In summary… this seems like an interesting and easy way to generate 3D printed terrains. I imagine you could put in a topo map of some place and print a 3D terrain from it. The only problem is that we don’t recognise terrain, so some of its visual impact is lost; whilst George’s face is recognisable from afar and interesting to use (human faces in particular), a naturalistic terrain map is not going to be recognised or understood in the same way. I’d imagine the only fun part of printing a 3D terrain would be the punctum of confronting someone with the printed model and screaming at them THIS IS THE SURFACE OF THE MOON, NOW IMAGINE YOURSELF AS AN INCONSEQUENTIALLY TINY 0.01MM DUST MITE LIVING ON THIS SLICE OF THE MOON!

PS: the above lithophane of George’s face obviously has too many facets and will take a bazillion boring years to render. So please posterise the faces of your loved ones before turning them into lithophanes.

Making 3d models of everyday things with 123D Catch

123D Catch is a cloud-based app/service that converts still digital photos into 3D models. It’s available for iPhone/iPad, as a web app, and as a PC desktop app. This was its description when it was still Project Photofly:

“Capturing the reality as-built for various purposes (renovation, rapid energy analysis, add-on design, historic preservation, game development, visual effects, fun, etc.) is possible using your standard point and shoot digital camera thanks to advanced computer vision technologies made available through Project Photofly.”

I am thinking of using it more for a project so I started off with a few experiments with two things on my table. You might recognize these as the characters from Carpetface… and well, they’re the things that live on my table…

The Excitable Dog
In case you are wondering, the “Excitable Dog” is a $2 water-gun squeeze toy from Daiso…
123D Catch requires you to take about 40 photos around the object, after which you tap the thumbnail in the iPhone app and wait a very long while as it processes. After it has uploaded to the cloud, you can tap it again and approve it for sharing with the community. Unfortunately, if the app is unable to process the images, it will show a big “X” on top and you will have to try again with a new set. For me, items which are shiny or translucent almost always fail…

This was the first successful item to be “caught”.

Screen Shot 2013-05-05 at 1.26.06 PM.png
The Horrifying Dog (View on 123d)

Screen Shot 2013-05-05 at 1.26.17 PM.png
Screen Shot 2013-05-05 at 1.26.31 PM.png

Screen Shot 2013-05-05 at 2.29.06 PM.png
Looking at the result, I realised that the bottom and the back of the excitable dog were not evenly lit and thus did not show up at all. So I decided to use an IKEA lamp with a flat top as a kind of makeshift lightbox.

And so this is what a broken piece of Aristotle’s head looks like.

Screen Shot 2013-05-05 at 4.16.42 PM.png
Aristotle’s Head (View on 123d)

Screen Shot 2013-05-05 at 4.14.15 PM.png

Screen Shot 2013-05-05 at 4.14.30 PM.png

Screen Shot 2013-05-05 at 4.14.42 PM.png

With mesh
Clearly a few improvements still need to be made to how I take the pictures – I need to change the surface on which the items are placed, because if both the item and the surface are white there is no visual contrast. I think I will find a grid or something, and also stick some random coloured dots on the objects, which might help as markers. Another thing I want to build next is a camera and lighting rig.

Needless to say, I’m really excited about 123D Catch even though it is still in its infancy (the cloud-based service is only about a year old now). I think it will be important because it works on consumer mobile devices and computers, and anyone can use it. As it becomes more and more accurate, I can see people taking pictures of everyday things, and museums making use of this technology for educational and archival purposes.

Next steps: I intend to figure out how to build a camera and lighting rig, clean up the models and experiment with Meshmixer, but in the meantime, here are some more of their official tips on how to improve the image quality:

Team Fire’s Fun House

After watching Grand Designs, I decided to invest in a sweet little piece of virtual land. Since we can’t build it in real life, we’ll build it in Second Life. I hadn’t realised how affordable Second Life land was in the rolling auctions – for about L$2500 (approx USD10) I got a simple 512 sq m plot (my land tier limit) on the side of a mountain by the seaside. It seemed like a good kickstart for practising some building and scripting. I don’t know if that’s a good price for the Second Life land auctions, since I haven’t watched the numbers for long, but it seems reasonably affordable as the startup cost to embark on this game. A GAME WITH NO RULES…

Introducing the site of TEAM FIRE’s NEW FUN HOUSE:



Okay nothing has been built yet but we have grand designs for a skybox/sandbox of some sort… in the meantime, I rezzed up a giant flower that rotates. I mean, obviously, it can be really hard to find our precise plot on the mountainside if there isn’t some giant spinning psychedelic flower floating over it. And so we sat on top of the giant flower on the mountainside and admired the view…


Screen Shot 2013-04-04 at 3.42.07 AM.png

OH, and we also met our hunky tattooed neighbour, who is probably going to start up a chain of sexy Nevada-style casino-brothel skyscrapers next to us. He probably thinks we’re a bunch of inexperienced tools, whiling away our days, sitting on a giant ridiculous flower next to his plot…


I’ve decided to separate my SL from my RL interests, so for those interested in following our building progress and other tips/notes on Second Life building, you can visit our other building blog at