I self-studied for 6 and a half hours to pass the Unity Certification (Unity Certified Associate: Game Developer)

I just did the Unity Certified Associate Game Developer exam and passed it. Aight, I know this is probably going to sound like a HUMBLEBRAG, but I am writing this post because I was originally pretty apprehensive about taking the exam. Although I’ve used Unity for several years, I wouldn’t describe my job role as a game developer, so I worried that what I had done before wasn’t good or “professional” enough. Before the exam, I also furiously googled for people’s accounts of how they studied for it and what exactly they studied, so in case there are others who come after me doing the same thing, I thought I’d add a description of my experience and what I did to pass!

I know that the Unity Certified Associate is considered the “entry-level” certification, but even if it’s the entry-level test, it still needs some studying! Besides my artistic endeavours (which this blog is mainly about) I actually work a full-time job (and not as a game developer), but Unity happens to be something that my work would like me to focus on a bit more. It has also been a very intense few weeks at work – on top of which I am caring for a very demanding toddler and working on several personal projects on weekends – so I was very busy and only really able to eke out a little bit of time for studying.

Now, the title of this post is a little click-baity, but it is true. It so happened that I recently re-installed RescueTime, so I am able to definitively tell you exactly how long I spent preparing and “studying” for the exam from start to finish!! No more handwaving or vague estimates; I can actually tell you that in total, I spent exactly:

38.5 hours preparing for the Unity exam:


32 hrs (8 hrs x 4) : Attending a Unity Game Developer course
6.5 hrs : Self-studying

Breakdown of Course time

32 hrs: Attending a Game Developer course. I decided to sign up for and attend a course with fixed hours and a human instructor so it would formally “block out” time in my calendar to study Unity. I could have done it online on Udemy or Coursera or something like that, but attending it with real people gave me the pressure to actually finish the course, and I also got to hear the sort of questions that new users ask (useful for someone who intends to teach it). Since it was formalised as a course I had to attend for 4 weekends, I asked my parents to help with childcare (thanks mom and dad!!!) during my course hours, so I could really dedicate the time to studying Unity and asking the instructor all the questions I ever had about Unity. I should add that by the point I took the course, I already had several years of experience of casually using Unity, so the material was generally very simple for me, but I really appreciated having someone tell me what the OFFICIAL way to do things was. I’ve been anyhowly doing things for a long while (because I was self-taught, in a very disorganised way) and the instructor Siang Leng showed me many quick fixes for things I had been doing in very weird ways, so this was very enlightening to me.

For me, my intention when I take a course like this is to eventually reach the level at which I could teach the subject: to be able to explain in detail the semantics of the program’s user interface, to understand how the formats are encoded and used, and to fully understand the processes from start to finish so I can do them in my own way, instead of just copying what the instructor is doing. I like to think that my capture rate (the rate at which I absorb what the instructor says) is very high, if not 100% at this point. Once I am shown how to do something, I will go and make sure I can do the task myself, and I will screenshot or even make a screen video of myself doing the task on my own after the instructor does a demo, and immediately publish this to my own wiki (my ‘Second Brain’). In forcing myself to document everything this way (to be able to use my own demo to teach others), I am pretty sure that I… have more than accomplished the “lesson objectives”.

Breakdown of Self-Study time (6.5 hrs)

3.5 hr : Completing all the quizzes on the official Unity Courseware. When I did the course, I was given access to the official Unity courseware on GMetrix. Now, this courseware is the one for the “Zombie Toys” project that I vaguely remember trying out yeaaaaaaars ago when I first started using Unity. Not that I ever completed it. A lot of it seems very outdated, as it was done in Unity 5.something, but I decided to do the quizzes at the end of each section. If I got a section mostly wrong, then I went back to watch the video for it (at 2x speed, of course). I think I breezed through the first 10 chapters without getting quizzes wrong; the second half was the stuff I clearly wasn’t so familiar with, things like Animation and Audio.

3 hrs : Mock exams. I had access to a special 400+ question mock exam bank prepared by my course provider, like a kind of “ten year series”. To be honest, I didn’t have that much time, but at the barest minimum I decided that I would go through every single mock question once, checking each question against the answer key as I went. I did some of this whilst breastfeeding Beano with a split screen on my phone; however, I quickly realised that studying on my phone wasn’t ideal for certain sections, because I really ought to have just done it with Unity open in front of me.

As I went along, I looked up each section in the Unity Manual, googled any words I didn’t understand, and opened up Unity to try each feature out in a test project. In Unity, I created every single possible asset type once and added every 2D and 3D component once. I made handwritten notes as I went along, and later I also ‘revised’ from these notes by highlighting key words.

I asked my colleague (Unity guru!!!) for areas he thought I should revise, and he mentioned a few that I realised I was less familiar with – Animation, Audio, etc. So I tried going through the motions of creating an Animator Controller, setting up some Audio Mixer groups, trying out every single type of light with all the different shadow settings, making different particle systems, etc.

I am glad to say that the outcome was better than random! 644/700 means I scored about 92/100, so yeah, I’m happy with that score. It was on the whole easier than I had expected, but I might have been lucky with the draw of the questions; I recognised several of the questions and topics from the official courseware and mock tests. The time (90 min) was more than enough: I sped through it once and finished within 40 minutes, marking all the questions I was not sure about for review, and then used the remainder of the time to check the ‘marked for review’ set twice through, unflagging questions as I made up my mind about the answers. After having checked it as best I could, I decided to submit (20 min early). (I was very glad to have done it on my own computer instead of at a test centre, which would probably have given me a lot of nerves.)

So what does this mean? It means that the exam really is more about your experience and familiarity with the software and scripting. If you are a casual Unity user of several years, it is possible to pass the Associate exam (not the Professional exam) with what is basically just 2-3 evenings’ worth of extra studying (6.5 hrs) on top of completing a basic game dev refresher course (32 hrs).

I hope this helps someone else out there trying to decide whether to take the Unity exam and how to study for it!

Many thanks to the Dingparents whose help made it possible for me to study for and ace the exam!!

My First Vinyl Cutting Project

I’ve always liked vinyl as a material, since the process of labelling and thinking about the text has always felt like a meaningful part of my work. Some time back I also enjoyed working with cutting acetate-type sheet material, but cutting it by hand was quite a schlep. Whilst mindlessly browsing a certain (ahem) short-form mobile video sharing social media platform, I kept seeing lots of “behind the scenes” shots of people using cutting machines to create stickers and vinyls as part of their “quarantine etsy home business”. Some of them showed sophisticated uses of the machines to do precision things like layering vinyls, foil embossing, heat transfer film, debossing, etc. – i.e. the things that are mainly done at a commercial print shop, even though we’ve had the technology for ages and ages and it is pretty simple. (The less impressive ones were just repetitions of the same type of etsy product copied from one another, and some pretty basic things which made me say “HOW IS THAT EVEN A BUSINESS? People pay money for this???”)

The electronic cutting machines I kept seeing were the Cricut and the Silhouette, the latter of which I had used once in NYP’s Makerspace, somewhat fruitlessly (because the grip mats were not maintained well in the shared working space). Somehow, I had not really thought about a home vinyl cutter before.

This class of electronic cutting machine can cut vinyl, paper, cardboard, plastic, stickers, cloth, thin sheets of wood – practically any sheet material – with perfect accuracy. You can also insert a pen into the slot and it will draw for you, but it is so perfect you may as well have used a printer. There are no errors. I couldn’t possibly draw as perfectly as this machine, unlike my experience using more shonky plotters. In fact, considering what a precision device this is, it makes the Line-us look like a toy.

To be honest, the downside is that the machines are not open-source, and I’ve also read that Provo Craft has been aggressive in pursuing legal action against software makers who have tried to reverse engineer their software in order to make the machines cut files directly (bypassing the default Cricut Design Space). So the machines have their own ‘ecosystem’ catering to communities of users who are largely home-crafters and small businesses. The cutting files have to be uploaded via their proprietary software (it accepts png/gif/jpg/svg) and sent to the machine through that software. Up to about the 2010s, it appears that it ran on a cartridge system where everyone had to buy pre-set cartridges, which wouldn’t have been interesting to me at all.

Probably the weirdest part is that it seems to have created a niche of users who are not skilled or tech savvy enough to design the files, all searching for cutting files and ultimately willing to pay shocking amounts for files that they can cut with. (Cue more of the “HOW IS THAT EVEN A BUSINESS? People pay money for this???”)

Knowing all this backstory to the way it is run, why would I still get a machine like this? Well… although there are alternatives like the KNK Force/Maxx/Zing, Skycut, GCC, Saga, Vicsign, Teneth, Liyu, Boyi, etc (so many), many of these are pricier, all have their own software to deal with, not all are as well documented, and I may not have the time to calibrate the blade settings one by one for each material… so… eh. The most important thing is that I can just send an SVG file over and get it cut, like how I might with the laser cutter. That’s all I really need. So for me, going with the big-name machine means that it works out of the box.

Since Illustrator is kinda my thing, I just did up a quick idea in it for a metal-style name text for Beano’s toy piano. Some people prefer to use things like Inkscape, Sure Cuts A Lot (SCAL) or Make The Cut (MTC – which appears to be abandonware now) to produce the SVG files (I also know that SCAL and MTC were the two software makers who were forced to make their software non-compatible with Cricut). I think this points to it being a casual-crafter user base, not an art/designer user base, where I would have thought that Adobe Illustrator would be considered the industry norm for generating SVG files. Anyway, I could also imagine coding up the SVG markup directly to get the files, maybe through Processing again – a rough sketch of that idea below.
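For the curious, here is a minimal sketch of what generating a cut file in code could look like, using nothing but the Python standard library. The shapes and dimensions here are placeholders for illustration (not the actual piano label file); Cricut Design Space accepts plain SVG like this, though any text would first need to be converted to outline paths.

```python
# Hypothetical sketch: writing a simple SVG cut file by hand in Python.
# Each closed path in the file becomes one cut line on the machine.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="100mm" height="40mm"
     viewBox="0 0 100 40">
  <rect x="5" y="5" width="90" height="30" fill="none" stroke="black"/>
  <circle cx="50" cy="20" r="10" fill="none" stroke="black"/>
</svg>"""

with open("cut_file.svg", "w") as f:
    f.write(svg)
```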

The shapes in the SVG have to be “welded” together in Cricut Design Space or else it will try to optimise the space and rearrange your cut items all around.

I bought the cutting machine online, but I did make a trip to Plaza Singapura’s Spotlight, where Cricut has a big area at the front of the entrance, along with its vinyls. I took one look at the price of the vinyl there and basically made an about-turn – they were in the SGD 17-45 range (*spits out my tea*).

For supplies, I got rolls of Oracal 631 Matte for cheaps online. After doing a bit of research, it appears that a standard vinyl used by vinyl shops is the Oracal 631 (removable) or Oracal 651 (permanent). The adhesive on 631 vinyl is a clear, water-based removable adhesive while 651 is a clear, solvent-based permanent adhesive. Maybe I will get 651 for future projects but at the moment I just got rolls of the removable 631 vinyl which I could then use for household projects as well as screenprinting…

At Spotlight, the Cricut-brand removable vinyl was SGD17 for 4 feet of Black Removable Vinyl (SGD 4.25 per foot). But online I paid SGD44.70 for 30 feet of Oracal 631 Vinyl (SGD 1.49 per foot for the Oracal 631 Black Vinyl). I also got a roll of Transfer Tape online at SGD24 for 50 feet (SGD 0.48 per foot). For the “default” vinyl, I was a bit wary about getting random Chinese-brand vinyl just because I wouldn’t know what kind of adhesive it would be using, although I guess if I want to experiment more with materials I will need to order more samples from different producers, especially when it comes to the weird and wonderful world of HOLOGRAPHIC VINYL.

I got some tools from the Nicapa brand, which were a lot cheaper than the Cricut-brand tools. I think you really do need the tools to do the “weeding”, or removal of excess vinyl. I could have packed more items into the vinyl sheet, but as this was my first try I didn’t want to be THAT adventurous. It seems inevitable that there will be a bit of ‘wastage’ along the way.

I sometimes try to imagine what a printmaking class would comprise (having never formally studied printmaking or art or design when I was younger). If printmaking were mainly about the psychomotor skill (and not about having to study the history of printmaking or the cultural aspects of the medium), then in the future, would anyone really need to study printmaking, or could they quite possibly DIY it with a precision cutting machine like this? A print would be made by simplifying an image into its main regions and colours, and then vinyl cutting those specific areas in whatever coloured vinyl one could obtain. With a physical vinyl, more vibrant or unusual colours beyond the digital printing gamut could be obtained – like spot colours, pearlescence, reflective mirror finishes, or holographic effects…

The actual time spent making the digital file and writing this post was several times more than the time spent physically executing this project. It took me probably a maximum of 15 min from cutting a piece of vinyl off the roll, loading it onto the LightGrip mat, cutting, weeding, laying over the transfer tape, cleaning the target surface, and transferring the vinyl to the surface. So yeah… precision and speed were achieved.

The final product!

Rochor Dream: HSL/HSV colour values and making an iridescent/rainbow shader in Blender

Recently I’ve enjoyed just playing around with colour in Blender. There are three ways of declaring colours in Blender – hex, RGB and HSV – very similar to CSS/HTML, where you declare colour in hex, RGB and HSL. In a way it’s easier to use HSV or HSL because the model is based on the colours themselves, but for each colour you can also change the saturation and lightness, so it is a lot easier to pick and compare colours based on how close one colour is to another.

You can’t possibly do that by just eyeballing the RGB values, or worse still the hex code for a colour (hex being the more compact and thus less human-readable way of declaring RGB values). So in a way HSL/HSV is a bit more designer-friendly. Most young designers don’t really delve that far into digital colour or colour spaces – it seems to be something that concerns programmers and developers more – but I wanted to get an iridescent, colour-shifting hue, and the only way to get it is to look a bit closer at the way in which colour is picked.

Iridescence is where the surface changes colour as the angle of view or angle of illumination changes.
HSL stands for hue, saturation, lightness, while HSV stands for hue, saturation, value. (Personally speaking, the HSV and HSL colour spaces look pretty much the same to me…)
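As a quick aside, Python’s standard-library colorsys module illustrates how the two models relate. This is just a hedged sketch with an arbitrary sample colour (not one from my shader):

```python
import colorsys

# Compare one colour across RGB, HSV, and HLS (colorsys's name for HSL).
# All channels are floats in the 0..1 range.
r, g, b = 0.8, 0.2, 0.4                     # the same colour as hex #CC3366

h, s, v = colorsys.rgb_to_hsv(r, g, b)
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)    # note colorsys's H, L, S ordering

print(f"HSV: hue={h:.2f} sat={s:.2f} val={v:.2f}")
print(f"HSL: hue={h2:.2f} sat={s2:.2f} light={l:.2f}")

# The hue channel is identical in both models; only the other two channels
# differ. That shared hue axis is what makes it easy to pick a "nearby"
# colour by nudging H alone -- something you can't do by eyeballing hex.
```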


There’s some image compression on this image, so the colour wheel is a bit wack, but you can see where the selector dot moves as I tweak the H, S, and V values….

In Blender there’s the handy Color Ramp, which maps values to a gradient of your colours. You just define the colours at the ends and then get Blender to calculate the gradient between the two (or more) colours. What I found is that you can ask it to interpolate the hue around the wheel either clockwise or counter-clockwise, and either via the nearest route (when you want complementary colours) or the furthest route around the entire wheel, thus achieving that distinctive “iridescent” look which is very similar to what we perceive on real iridescent surfaces.
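For anyone who wants to poke at this via scripting, here is a minimal sketch of the same setup in Blender’s Python API (assuming a 2.8x-era Blender; the two end colours are placeholders, not my final palette):

```python
import bpy

# Build a material whose Color Ramp blends in HSV and takes the "Far" route
# around the hue wheel, driven by viewing angle for an iridescent effect.
mat = bpy.data.materials.new("IridescentSketch")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

ramp = nodes.new("ShaderNodeValToRGB")      # the Color Ramp node
ramp.color_ramp.color_mode = 'HSV'          # interpolate hue/sat/val, not RGB
ramp.color_ramp.hue_interpolation = 'FAR'   # the long way round the wheel
ramp.color_ramp.elements[0].color = (1.0, 0.2, 0.2, 1.0)  # placeholder ends
ramp.color_ramp.elements[1].color = (0.2, 0.4, 1.0, 1.0)

fresnel = nodes.new("ShaderNodeFresnel")    # varies with the viewing angle
links.new(fresnel.outputs["Fac"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
```

Swapping 'FAR' for 'NEAR', 'CW' or 'CCW' changes which way round the wheel the gradient travels – exactly the difference visible in the colour wheel screenshots above.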

When you change the H(ue) value, your selector goes around in a circle. When you change the S(aturation) value, the selector moves outwards from or inwards towards the centre. As for V(alue), the colour goes from bright to black. Compare this with the rather confusing mixture of red, green and blue that goes into any colour under the RGB colour space.

To be honest, I’m not 100% sure this is the final colour I am going for; it still feels like an experiment. Anyway, I’m going to try to make a couple more things like this in the coming month; hopefully I can make a few interesting-looking prints on the metallic pearl paper…

Excerpt from Rochor Dream:
Coming soon on Plural Magazine’s 100 Artists!

New motherhood is like a trip to a foreign country: Flatlands

Here’s a recent visual experiment that I made in the stolen moments of Beano’s naps. The setting is the 3-room rental flat we used to stay in – a very mundane, default-template 3-room “New Generation” (slab block) HDB flat built back in the 70s and 80s. And I think I’ve finally found a way to explain this thing that I’ve tried to explain many times before (but struggle to, similar to how it’s hard to explain my experience of taste-shape and mirror-touch synaesthesia).

For me, at any one time I always feel superimpositions or juxtapositions of other places, which feel a bit like memory palaces where I can store facts, thoughts, and memories of another time. It’s hard to explain, but it is like when you have a work phone call and you start doodling nonsense on a piece of paper. In my case, when I start to daydream or let my mind wander (also: this happens when I am extremely focused on an urgent task and everything else zones out), I always end up recalling a visual memory of a place I’ve visited in the past. I imagine tracing out its contours, I imagine what the details must be like, what the lighting must be like. Honestly, I can’t really explain why certain views keep popping up as the ‘memory palace’, as some of the locations are pretty inconsequential and emotionally insignificant to me. Yet! My mind returns to them for further rumination. To what end? I do not know.

I began writing the following some time back when Beano was a much smaller baby. But now that we are all locked down at home for the coronavirus, and I haven’t left the house and its vicinity in days, fleeting memories of parks I’ve walked in come to mind. I found myself scrubbing through these albums trying to find the name of a particular memory that may as well be a dream. There was something oddly compelling about these images I had taken of my walks, and frustratingly I COULD NOT FIND THAT ONE IMAGE OF THAT ONE WALK IN MY MIND. It also turns out some of these images are pretty weird. Why are there no people in them?

It was always in the back of my mind to do something with this huge lot of photographs, so… now they have ended up in this visual experiment. I actually think it looks better than I expected, so I might even make more of them soon…


New motherhood is like a trip to a foreign country. Firstly, the middle-of-the-night feedings are conducted in near-darkness, with the endless droning of the white noise machine in the background, and some random show on Netflix playing to sustain your consciousness beyond all normal hours lest you fall asleep on the sofa and baby accidentally rolls off; not unlike when one takes a plane and night-time is arbitrarily enforced upon you, the sound of the engines whirring is ubiquitous, and all you’ve got to watch are some random blockbusters or episodes of Big Bang Theory on the inflight entertainment.

When Beano was very, very small, I found myself trying to claw back a sense of mobility through a series of increasingly long walks with Beano strapped to me. In some ways, this strategy reminds me of the Capital Ring walk I did in 2017. Living in Greater London makes one feel crushed by one’s own insignificance in a big city that is too vast to know by foot, so I thought I’d try to complete a ring around the city.

Once upon a time I was going to do a detailed expository blog post for each leg but AINT NOBODY GOT TIME FOR THAT so here are quite simply the photo albums for each leg of the walk…

Debbie’s 2017 Capital Ring Walk!

The source material for “Flatlands”

“I decided to walk the supposedly 78 mile Capital Ring over 6 consecutive days. I say “supposedly”, for Debbie does not go “as the crow flies” but rather haphazardly in a squiggly line all over the map, and according to other mapping devices it seems I may have walked more than 150 miles in total. Rather than starting with the traditional route as listed in TFL’s maps and David Sharp’s guide book to the Capital Ring, I decided to start and end my journey at Stoke Newington’s Rochester Castle.”

14 March 2017: CAPITAL RING Stoke Newington to Woolwich

Day 1: Stoke Newington to Hackney Wick
Day 1: Hackney Wick to Beckton District Park
Day 1: Beckton District Park to Woolwich Foot Tunnel

15 March 2017: CAPITAL RING

Day 2: Woolwich Foot Tunnel to Falconwood
Day 2: Falconwood to Grove Park

16 March 2017: CAPITAL RING

Day 3: Grove Park to Crystal Palace
Day 3: Crystal Palace to Streatham Common

17 March 2017: CAPITAL RING

Day 4: Streatham Common to Wimbledon Park
Day 4: Wimbledon Park to Richmond

18 March 2017: CAPITAL RING

Day 5: Richmond to Osterley Lock
Day 5: Osterley Lock to Greenford
Day 5: Greenford to South Kenton

19 March 2017: CAPITAL RING

Day 6: South Kenton to Hendon Park
Day 6: Hendon Park to Highgate
Day 6: Highgate to Stoke Newington

A Glorious Bale of Virtual Hay: Second Life worlds and their visual references

My Second Life Avatar is now approaching its teens! Monster Eel is 13!?… (and Monster wasn’t even my first character). Every few years when I return to Second Life I’m delighted to find that it has its own life, going on strong. Things are even more detailed now. Who is doing all this? Who is paying for people to do this? Is it all just a passion project for people? Why does this unnecessarily detailed digital bale of hay exist? There’s a whole cottage industry of people making exquisite virtual hairpieces and billowing blouses and freckled skin and distressed furniture and plants and antiques and futuristic gizmos for sale (sometimes dispensed via some unnecessarily complicated gacha machines)!

Over the weekend Beano decided to have a long nap whilst strapped to me (WOW!!!!), so Mummy went on to Second Life to have an adventure without leaving home… and also to look at the types of interactions in these ‘installations’. Thinking about the references that each of these worlds draws upon, I realised that the places I visited could be divided into 6 different categories….

1. Depicts an abstract world
Betty Tureaud’s Rooms
https://secondlife.com/destination/rooms-by-betty-tureaud

2. Replicates real world and has specific references
Paris for Ara
http://maps.secondlife.com/secondlife/Simpson%20Bay/114/79/27

3. Replicates real world but has no specific reference
Breath of Nature (Serena Falls)
http://maps.secondlife.com/secondlife/Serena%20Falls/28/82/22

4. Depicts a fictional world with specific references to fictional works
Kintsugi
http://maps.secondlife.com/secondlife/Runaway/71/123/23

5. Depicts a fictional world with some realistic elements set in the past
Puddlechurch Rye
http://maps.secondlife.com/secondlife/Puddlechurch%20Rye/128/182/44

6. Depicts a fictional world with some realistic elements set in the future
Planet Vanargand Outpost Fenrir & Solveig Village
http://maps.secondlife.com/secondlife/Amazing%20Island/148/169/242

[Admittedly, I have been writing a lot of LESSON OBJECTIVES lately and this might be seeping into the above…]

The categories are not black and white; they blur into one another. Perhaps there are unknown references behind them all that I am not aware of. To what extent are these novel creations, or are they actually faithful copies of weirdly specific things in the creators’ own worlds? I… really don’t know. Will some of these mysterious anonymous SL creators ever reveal a bit more about their own design process…? Is it recorded somewhere, on the odd Blogger webpage or Flickr group, posted online under pseudonyms that I can find?


1. Depicts an abstract world
Betty Tureaud’s Rooms
https://secondlife.com/destination/rooms-by-betty-tureaud

This is like looking into an early-2000s book on Creative Coding or Intro to Processing, or looking at a folder of three.js’s WebGL experiments. Experiments and snippets, I say, because these abstract rooms are more like raw snippets than actual stories or narratives or worlds to explore.

The iridescent rooms look empty, but when you walk into the middle of them (probably triggered by your avatar walking onto the slightly raised surface), different interactive animations are triggered. This reminds me of SL in the days of yore, when interaction and realism were even more limited, so all you could write an LSL script to rez up was a bunch of basic geometric forms that were randomly coloured whenever you entered a space, and for interaction you could move these about randomly (although to what end, this would be unclear). In fact, this is EXACTLY what happens in some of the rooms.

Whilst I love these rooms because they definitely look nothing like real life (and it seems to me that Betty Tureaud’s works over the years have been focused on creating abstract worlds that don’t exist in real life, peppered with statues of human forms), I still think that the interactions here feel a bit like an afterthought, and aren’t as naturalistic or intuitive as they could be (based on the technology currently available in SL). It’s just like how we don’t use marquee or iframe or mouseover or Flash anymore, and JavaScript mouseovers and CSS transforms don’t really impress anyone anymore. (It doesn’t mean that I don’t enjoy walking through the rooms though!)

2. Replicates real world and has specific references
Paris for Ara
http://maps.secondlife.com/secondlife/Simpson%20Bay/114/79/27

Paris for Ara is a location in Simpson Bay labelled under photogenic spots, and boy is it photogenic. I’m betting that many an SL fashion shoot has been done here. Although it is supposed to be Paris, it looks a bit more like Carnaby Street in London than Paris per se, with all the English signage mixed in; with the prominent rainbow pride flags everywhere (yay!), parts of it also feel more like Soho. The vision for this is ostensibly to render a real-world scene into Second Life.

Some of the details are crazy amazing even when you zoom in – for example, these steaming hot beignets (French doughnut fritters) I found on a cafe table. I’m impressed!

A photogenic spot like this is probably quite universally understood and enjoyed by all, since it has a real-world reference (even if it’s been fudged a bit by mixing elements from different countries – but you know, ‘generic European city with street-side cafes and pubs’), and some of the buildings are even faithfully rendered in their interiors, so I would imagine these are spots designed to be rented out to residents or for retail purposes. I walked into what I think was a cream cake shop and there were 3 floors of empty rooms above, overlooking the street. There was even a torch by the stair, because you might find one in a real stairway, but I didn’t use it because I had set the environment to SUNRISE.

3. Replicates real world but has no specific reference
Breath of Nature (Serena Falls)
http://maps.secondlife.com/secondlife/Serena%20Falls/28/82/22

Next I visited another photogenic spot, Breath of Nature in Serena Falls. A beautiful flower meadow with pastoral elements rendered in loving detail – an endless sea of soft dandelions, a white horse, a windmill, an old farmhouse, some sheep, a rustic wagon… I know, people dig this shit. Can’t go outside into nature? Well, here’s nature for you in Second Life. Oh, and with some generic American Top 40 alt-rock-country-pop internet streaming radio channel playing by default in this SIM… as always. I’ve always wondered if this is the soundtrack that the creators of these objects live by. Once in a while a SIM has good radio taste, but most of the time it’s just this not-very-interesting generic internet radio streaming wherever I go, punctuated by the sound of my avatar thudding against things by mistake (THUNK THUNK THUNK THUNK).

There are some gems here though. A bale of hay with an ingenious way of seeming real. I know, these tropes of construction must have been devised years ago, and I admit I have never been deeply involved in building things in SL (I’m more of a tourist there), but there are some cool tricks to be found here. It’s not hair particles that give our hay bale its realistic appearance; a few strategically placed strands do the trick.

I’m all like, who decided to build this in such detail? How many hours did it take them to construct the chicken coop with its wires and its distressed wood texture, to decide on its form? Does the creator have a chicken coop just like this? Did they HAVE to design a chicken coop first, or did they use a reference from somewhere? I mean, this is not even a normal chicken coop. It’s a set of shabby chic drawers converted into a chicken coop. With a pile of rustic bricks by its side.

Finally, this bucket of ducklings, with a duck about to jump into the water while mother duck looks on. This item even chirps. Yes, the ducklings, they are chirping. The water is cleverly done with just a partially transparent alpha layer on top, with a translucent white pattern that makes it look like a reflection on water (not a true reflection of anything, but it doesn’t have to be in order to look real enough from a distance!)

4. Depicts a fictional world with specific references to fictional works
Kintsugi
http://maps.secondlife.com/secondlife/Runaway/71/123/23

This parcel is named Kintsugi (the Japanese term for repairing cracked pottery with gold), but really it is a tribute to Studio Ghibli’s Spirited Away, which I will confess I can no longer remember the storyline of. It is supposedly based on the fictional world of the anime, and this plot relies a lot on notecards and the chat system to distribute information about the world to the user. Personally, I am not much of a fan of notecards, even though I like words – because all these notecards fall into my inventory and become a big mess over time.

A magical house on an island….

A series of red torii shrine gates… because why not, if you already have made one beautiful torii gate?

The water isn’t really Second Life water, but some other object with obviously faked water ripples on it, which look realistic from a distance but start to look very artificial close up. You can walk on the water, which I think is the point of this magical world (in most of SL, you can walk into the ocean and even take a rather long walk through it, although it might be quite boring).

The mist and atmosphere are nice, but once again, as with any role-play environment, the reverie of being in a mystical forest is sometimes punctuated by other SL residents walking by. Yeah, one thing I don’t get is why there are so many SL residents dressed as ladies with big bosoms and big hair and big butts in tight dresses…

5. Depicts a fictional world with some realistic elements set in the past
Puddlechurch Rye
http://maps.secondlife.com/secondlife/Puddlechurch%20Rye/128/182/44

Another photogenic spot, Puddlechurch Rye is an event space reminiscent of a warehouse, dressed up as a 1920s Parisian speakeasy cigar lounge with plush carpets, stacks of antique books, delicate chandeliers, a stage for performances, and a gallery space. It reminds me a bit of when I visited the Museum of Everything in Paris (a travelling museum for artwork by outsider artists).

How much of a world like this is actually created entirely from scratch by one person (or a small team)? How many man-hours go into designing a world like this? Or is this in part a very clever curation of well-chosen objects from different creators to paint this speakeasy ambience for us?

What’s interesting is the detail with which the exhibition has been set up – with draperies, conventional framing and unconventional framing. Can’t do a real-world exhibition? Well, this is pretty close, although the artwork is also the world itself, which has been rendered for us in such detail.

An exhibition space for flat 2D artwork, shown in several different ways…

Conventionally framed artworks…

Along with some unconventional framing…

And finally, some moving louvres to display 2D artwork. Not entirely interactive, but some ideas here on different ways to present a work in a virtual space…

6. Depicts a fictional world with some realistic elements set in the future
Planet Vanargand Outpost Fenrir & Solveig Village
http://maps.secondlife.com/secondlife/Amazing%20Island/148/169/242

The thumbnail for this outpost on the SL destinations board was a huge “alien” mountain. But really, mountains are just boring old mountains like the ones on Earth unless you say… IT’S A SPACE BASE FROM THE FUTURE, and here’s a space outpost to go with it! I landed in this space outpost floating in the sky (no biggie, not a hard thing to build) and was immediately overrun by other residents rezzing on top of me – skimpily dressed ladies in tight dresses and high heels running around over small old me. Yeah, so much for the sci-fi vibes…

I enjoyed walking around this space base until I went through a door which said “NO ENTRY”, which I assumed was written specifically to entice me to enter anyway. A few metres further in, they must not have finished building the space station, because I hilariously walked into a big hole in the floor, immediately falling about 3000 metres back down to the ground, landing noisily on a giant geodesic dome…

Finally I found myself in an empty carpark in this alien world admiring the detail of the snowflakes blowing past me. No detail has been spared! The snowflakes are not just circles, they are images of SNOWFLAKES.

At this point Beano woke up so I had to terminate my adventures in SL…


Why haven’t I made an ‘art’ project on Second Life before?

Last year the Linden Endowment for the Arts closed. For many years I had wondered whether I should apply for its land grants, but I never got around to it because Second Life was something I enjoyed as a game, exploring without a specific goal. It simply wasn’t high on my priority list, since building all of this requires quite an investment of time, and I’ve got a lot of real-world projects to finish. Second Life was leisure and enjoyment for me, not work – the same way one might enjoy a pleasant walk through nature without the desire to reshape it all. I suppose if you were just dabbling and not too sure whether you would commit to building such a project, the grant might have been a useful nudge to go and do it without any financial start-up cost. Land tiers aren’t cheap, after all. And if this is not art per se, then is this all a ‘vanity’ project?…

However, the closing of LEA is not as much of a loss as one might expect. I suppose if I were really motivated to create art in SL, I would continue to make it regardless of whether I had a land grant, and even with the closing of LEA, there continues to be lots of art on SL. To be honest, I never really got into the SL artist community. Besides a run-in with some people in Singapore building an amazing Sikh temple several years ago (what happened to it, I wonder?), I don’t know what happened to other SL makers in Singapore… Or maybe, if you are out there, give me a holla…?

Domestic Life in the time of Coronavirus: Sprouting Seeds, Mason Jars & Food Prep, and Not Exactly Bullet Journalling & Productivity

This blog has been a little quiet since the circuit breaker in Singapore began – I’m a person with too many jobs at the moment. I’ve been (full-time) teaching all my classes (say hello to 3-4 hour practicals via Zoom!?!) and taking a part-time Specialist Diploma (just because it’s circuit breaker doesn’t mean the essay deadlines were delayed!), whilst also full-time taking care of the baby human Bean (childcare centres all closed and grandparents advised not to travel over for childcare!!), which has left me with nearly no time to do any of the normal debbiethings I would usually get up to.

Maybe to build a little momentum and get the ball rolling on this dusty old blog again, here’s a little documentation of some of the domestic/productivity-related debbiethings I DID do during circuit breaker, in the stolen moments….

1. Sprouting Seeds: Growing Mung Bean Sprouts at home

It seems everyone’s newest urban growing craze during Singapore’s circuit breaker lockdown is mung bean sprouts, and yes… even I have been growing them. I’ve grown some sub-par or weird-looking sprouts in the past – we forget how used we are to seeing the commercial “taugeh” sprouts being all pasty white and yellow, and allowing the sprouts to turn green by giving them some sunlight also changes how they grow and how they taste. Growing fast sprouts for consumption is different from growing bean plants, and websites online all anecdotally point to a few things you can do to improve the quality of your sprouts:

  1. Grow them in total darkness (under a truly opaque cloth – a handkerchief will not suffice to keep the light out. I used a dark-coloured pillowcase folded over twice and draped it over the beans.)
  2. Change their water at least twice a day.
  3. Avoid disturbing the beans too much (Somehow they grow better when they get to really establish their roots)

Now, it’s not absolutely necessary, but I also got this microgreen tray, which has micropores that are supposed to enable a more even distribution of the microgreen seeds (although the mung bean itself is bigger), and which has two half-trays, making it easier to remove and change the water and allowing for planting two different seeds at the same time with different sowing times. (I’m just waiting to get some more microgreen seeds from local farms to see if there’s a microgreen that we will enjoy eating, so later in the year I’ll report back on the microgreens…)

Presoak the beans overnight in a bowl, covered by a cloth. Here I measured out 1/4 cup of mung beans. In hindsight… I probably needed half of that. These aren’t any fancy mung beans, just the cheap Redmart brand for everyday cooking.

After soaking overnight and skimming off the obvious split beans, the remaining beans were scattered over the tray and water poured in until it touched the mat. 1/4 cup of soaked mung beans fit almost exactly into the two trays.


The beans were then rinsed twice a day and left to grow under cover of darkness until they looked about ready to harvest on Day 5.

Here the human Bean inspects the Beans.

The roots are clean, so we ate them roots and all. I only rinsed them several times in order to remove the green bean husks, which are a little less palatable, texturally, but not entirely inedible.

Finally, the cleaned sprouts are ready to go in any dish you want. This made enough for about 3-4 meals of sprouts, so next time I’ll grow fewer beans at one go, as it’s nicer to eat the sprouts fresh.

The sprouts were blanched in boiling water for 1 min and then thrown into a big metal bowl of ice water to stop them from overcooking. Then they can be used in any recipe. I loosely used Maangchi’s Sukjunamul Muchim recipe to make a sprout dish to go with a big pot of Doenjang jjigae, which was also loosely thrown together with bits and bobs around the house.


2. Mason Jars: How I make Overnight oats and prep common ingredients ahead of time

I became slightly obsessed with mason jars after trying to find a replacement lid for a regular jam jar I had around the house, which got me wondering what constituted a standard jam jar lid size. I measured the exterior dimensions of the jar I had and it was 70mm – it turns out that this is the size of a “regular” mason jar. And there’s another common size that I find myself drawn to even more – the wide mouth. Looks like a drinking glass, but is microwaveable and oven safe? SIGN ME UP! Making overnight oats was the solution to my morning routine; I find that I can no longer skip breakfast without becoming faint and HANGRY, but often I don’t have enough time to prepare food for myself when I have to run an 8am or 9am class AND feed the Bean AND change her nappies AND check work email. So… Jars! JARS! JARS! George seems to think I’ve reached new instagrammable heights of food-prep-hipsterdom with my functional food prep, so here are some pictures that he ended up making me take.

These are 1-pint jars (473ml), just the right size for a portion of food (they are also the right size to pour a nice cold drink into!). I got 12 jars online for about S$48, and I also got a stack of both regular and wide mouth plastic lids for about S$5. The Ball jars themselves are definitely oven safe and microwave safe, as they were meant for preserving jams; if you buy random jars online, check to make sure they are suitable for such reheating use. (If you dig a little deeper online you’ll find a whole lot of alternative mason jar lids which work as fermenters, sprouters, graters, juicers, etc…)

The overnight oats recipe that I made up to my preference and have been using for some time now is this:

Debbie’s Breakfast

1/2 cup oats
1 tbsp chia seeds
1 tbsp flaxseed meal
1 tsp moringa leaf powder
1 tbsp dried cranberries
1 tbsp dried mulberries
2 dried apricots, cut into small pieces
3/4 cup milk
a squirt of honey

When I am eating it, I throw in about 1/4 cup more of milk, and sometimes some frozen mango pieces or frozen berries at the last minute, but I try not to leave the fruit in for too long (i.e. I don’t add it at prep time) because it can get a little weird and funky in there, like how fruit tastes when it has been allowed to sit in a wet plate for too long. It’s like a dessert, and I didn’t think I’d be eating this so often since I have a savoury tooth and not a sweet tooth at all (I have eaten savoury breakfasts for most of my life), but I was hoping that oats would aid my milk production (since the Bean is still breastfed), and it turns out that overnight oats SAVE TIME!


I also use the jars for advance meal prep at the moment. I like to make a big batch of caramelised onions at the start of the work week (2 jars’ worth, or a 2kg bag) and then stuff them into the fridge, so that during the week, whenever I make a quick meal or pasta, I can just throw in a handful of onions and it immediately makes it feel even more like a meal.

Debbie’s 15min Lunch



Ingredients
80g of vermicelli pasta
some bacon
4 cloves garlic
handful of baby spinach
smoked paprika
caramelised onions
caramelised red peppers or any other cooked vegetable in the house
and some leftover chilli flakes from when George last bought PIZZA

Steps

  1. Boil a pot of water with 1 heaped teaspoon of salt
  2. Fry the bacon in some oil at very low heat to render the fat
  3. Slice the garlic thinly and add to the oil. The heat should be low enough that only small bubbles appear at the edges of the sliced garlic.
  4. Add paprika and chilli flakes to the oil.
  5. Cook the pasta according to the timing on the packet; in my case it is 6 minutes. Set the timer for 5 min.
  6. Add in the onions and any vegetables to the oil. Wilt the spinach in the pan.
  7. A minute before the pasta is done, transfer the pasta and a big splash of pasta water into the pan.
  8. Allow everything to cook down until the pasta water is absorbed back into the pasta (usually 1-2 min more)

Lunch in 15 minutes!



3. Not Exactly Bullet Journalling: Improving my To Do List format

Longtime readers of this blog (who on earth is my audience? haha. hello friends???) will know that I am not-so-secretly big on GTD/PRODUCTIVITY. Sometimes George thinks I like doing work because I must ascribe some kind of moral value to hard work (à la the Protestant work ethic), but honestly I like working because… I enjoy it! I enjoy keeping busy and fiddling with things and doing stuff. I enjoy toiling away at things. (Oh. Maybe that is where Beano is getting her inexplicable drive to EXERCISE NONSTOP.)


During my maternity leave I had a phase in which I read all about bullet journaling. I also became aware that there’s a huge cottage industry of people and instagrammers banging on about their #bujo designs, although none of them look particularly productive to me, and if it’s not productive I don’t really need it. My notebook is like a cup I can empty my brain out into, so I don’t have to hold all that stuff inside my brain where it gets all crowded. I don’t really need my notebook to be neat, but I liked being able to physically cross off items on a list and review what I managed to complete at the end of the day (a sort of pat-yourself-on-the-back if you managed to do most of what you planned). Previously, I would write items in a list and then cross them out, which made them quite unreadable. I ain’t got time to document everything in a bullet journal, but I have incorporated the checkbox into my everyday to-do list format: I now draw a square and cross out only within the square when the task is done. I also draw an arrow to indicate if the task is carried over to the next page.
Whether or not you believe willpower is a finite or infinite resource, I do find that removing obstacles the night before helps get things going every morning (especially when I have to rush to feed baby and myself, and start my 8/9am class):

  • Getting hydrated in the morning – Before I go to bed I set out empty mugs with my tea and spoon, so I only need to add hot water the next morning. Often one needs to muster the will to do this small thing for oneself…
  • Pre-measured baby feeds – Before I go to bed, I measure out all of the bits that will go into Bean’s first feed of the next day. I’ve still been using all the travel containers to premeasure the oatmeal and formula for mixing into oatmeal feeds. It saves a bit of time when I’m rushing and multitasking.
  • Drafting emails on Google Keep – this is my scratch pad where I draft out bits of emails. It is quickly available on all my computers and devices, so I can paste completed emails in quickly at the start of the workday. I don’t send work emails after work hours because I think it’s important to observe the working day (and it is well known that people mainly check their email in the morning, so if you want a quick reply, you’ll want your email to come in right at the top of their inbox by about 10am)…

In the next post, I swear I will finally complete my series on House Renovations in time for the 1 year anniversary of having moved into this flat!

Standards for N95 Respirators and Surgical Masks

I’m not a medical professional, but as the mother of a young baby and the daughter of parents who are over 60 years old, I ended up doing so much reading up on masks/respirators that I thought I may as well collate my thoughts together in a post, which might be useful for others trying to understand the standards for respirators and masks, and the difference between disposable dust masks, surgical masks, and N95 respirators. [Note: this post was many days in the making because I’ve been caring for a small and very wriggly baby, so do note that I started writing this post before PM Lee announced the ‘circuit breaker’ in Singapore. I expect that guidance on masks may change over time…]

It is rather long, so here is a summary so you can skip to the specific sections quickly…

Pitta Masks and PM2.5 pollution masks do not filter for virus particles
The difference between Masks and Respirators
N95 Respirators Standards – US Standards (NIOSH) & EU Standards (CE)
Surgical Mask Standards (EN 14683+AC Medical face masks) vs Non-medical Filtering half masks Standards (EN149:2001+A1:2009)



Pitta Masks and PM2.5 pollution masks do not filter for virus particles

Recently I made the decision to withdraw baby from infantcare (having foreseen that there would be a long period where I would be asked to work from home – a prediction that has now been confirmed as reality), and as a result my parents have been coming to my house to care for baby whilst I work from home. This meant that my parents were more exposed than me because of their transit from their house to ours, and my mother expressed worry about taking Grab. So I decided to buy some masks, first and foremost for my 60+ year old parents, who are in the most vulnerable category.

Two months ago, my very crafty and industrious mother tried to sew some reusable masks out of cotton – but I just was not convinced that this was safe. Even from my layman perspective, how could a cloth filter the microscopic particles of a virus? So I went online and bought a packet of the first aesthetically pleasing mask I saw – the Japanese Pitta mask. But since then, I have read the fine print: Pitta masks cannot filter tiny virus particles. Online you can find people who have done tests on the Pitta masks and found that they “captured an astounding 0% of 0.3-micron particles”.

😱
To be fair, the Pitta mask manufacturer does not advertise it for virus protection. It is simply that online sellers are putting it up unscrupulously without mentioning that PM2.5 filtration is not enough.


Image source: https://smartairfilters.com/en/blog/coronavirus-pollution-masks-n95-surgical-mask/
Read more: https://smartairfilters.com/en/blog/pitta-masks-protect-capture-coronavirus-virus/


The difference between Masks and Respirators

Having ruled out Pitta masks, I started trying to understand if I should look for Surgical Masks and Respirators.

In a 3M document I found online, attributed to their “3M Subject Matter Expert – Asia Pacific Region”, they state the different purposes of masks vs respirators. It also says that “for any airborne particulate contamination such as an outbreak of PM2.5 / PM 10/ Severe Acute Respiratory Syndrome (SARS), Avian Flu, Ebola Virus etc. only respirators and not Masks, should be used to safeguard oneself from getting any kind of respiratory diseases.”

Respirators = designed to protect you from the environment
Masks = designed to protect the environment from you


N95 Respirators Standards – US Standards (NIOSH) & EU Standards (CE)

WHAT DOES A LESS-THAN-0.1-MICRON PARTICLE LOOK LIKE? Does a mask really filter a less-than-0.1-micron particle? Sadly, it seems that figuring out which mask can really filter such a tiny particle is impossible for us alone to judge; it is entirely down to a lab test – so you could say it comes down to the various ‘standards’ that a batch of masks has to meet when tested in a lab. So this led me to find out: who tests the masks to make sure they are up to the standard they say they are?

US Standard:
N95 Respirator [Filters at least 95% of airborne particles]
N99 Respirator [Filters at least 99% of airborne particles]
N100 Respirator [Filters at least 99.97% of airborne particles]
IS IT A REAL NIOSH TC-APPROVED CERT NUMBER? You can check here: https://wwwn.cdc.gov/niosh-cel/.

European Standard:
FFP1 Respirator [Filters at least 80% of airborne particles]
FFP2 Respirator [Filters at least 94% of airborne particles]
FFP3 Respirator [Filters at least 99% of airborne particles]
Check for a CE cert by looking at the cert’s issuing body. Unfortunately, there isn’t a centralised database you can look the number up in. For more, read here at CE-Check Support. But I have found that some of the certifying bodies have sites where you can enter the cert number to verify it.

Besides those standards, there are the Chinese KN95 and Korean KF94, which are supposed to be equivalent in standard to N95. How do we verify them? Is there any way to verify them? I don’t really know. ¯\_(ツ)_/¯

This is where it got murky… I wanted to buy some working N95 respirators, so I dialled in “N95” on Qoo10, Lazada, Shopee and ezbuy. Here’s an example of the first respirator with dodgy papers, which I found on Qoo10 from the seller “Collectible Haven”:


I took it one step further: I took the number they posted and checked it against the NIOSH certified equipment list. THE CERT NUMBER THEY PROVIDED WAS NOT FOR THEIR COMPANY OR THE PRODUCT! It did have the same product number, 1200F, though – but I do feel deceived.

There were several other similar examples online but I won’t go through them. You can do the sleuthing yourself if you are concerned. But here is an example where I did buy the mask, and it checked out correctly.

Here’s an original box of 3M 8210 masks, which is on 3M’s list of N95 masks, along with a diagram of how to tell if the mask is an actual N95 mask.


Source: How to check the NIOSH marking on the mask (ie: the N95 standard)

After discovering several masks online whose “documentation” was fake, I decided to see what really constituted a surgical mask. They’ve been saying that surgical masks are what you need to wear, but there were many cases reported in the media where people bought expensive ‘surgical masks’ which were very thin or suspicious. The Health Sciences Authority regulates the importation of surgical masks, and their website (currently down) had stated that there is a difference between standard face masks (paper or cloth) and surgical masks (a medical product, with bacterial filtration efficiency above 95%*).



Surgical Mask Standards (EN 14683+AC Medical face masks) vs Non-medical Filtering half masks (EN149:2001+A1:2009)

Some days back, I preemptively placed an order for what I thought were surgical masks in case we needed some.


These were the masks I ordered on Shopee from miluvs.sg


miluvs.sg posted a screenshot of the cert

These masks claimed to have a CE mark, which I went back to the original issuing agency to cross-verify. It might seem strange to go to that much effort to see if it was true, but I did: on the http://entecerma.it/ website (a weird marketing-oriented platform which helps foreign/Chinese companies bring products to European markets through CE and other certifications), if you dial in the code from the cert, the certification truly comes up as authentic.

But at the time I didn’t check what device they were certifying. What is “EN149:2001+A1:2009”? If you search for the original definition of the standard, it is for Respiratory protective devices: Filtering half masks to protect against particles. Which, translated, means it is a dust mask, not a surgical mask. For a surgical mask, the relevant European standard is “EN 14683+AC” – Medical face masks.


Source: miluvs.sg on Shopee

Again, this wasn’t false advertising on this seller’s part on Shopee. They did not say it was a surgical mask / medical product. I saw the picture of the mask and personally assumed it was a surgical mask, but in fact it is only a dust mask. A high-quality, authentically tested and CE-marked dust mask. Fair enough to them.

What is murkier is that there are many sellers who DO sell their product as a SURGICAL MASK with a CE mark, but the certs they posted show it is just an ordinary disposable face mask, or some simply posted nonsense. Here are 3 different examples (but there are countless more online):

Eg: Weilan777 on qoo10 – “surgical mask” with disposable face mask cert

Eg: OurFirstStore on lazada – “surgical mask” with cert for their machinery not the mask itself

Eg: Boslun on lazada – “surgical mask” with cert for different product

Did the sellers just assume that people’s eyes would glaze over when looking at the certs and they’d assume it was all good? You could say, “who cares about standards? just get the masks to the people quickly!”, but if you are buying this to protect a loved one, you don’t want to feel like you’ve been deceived into buying it, especially when the masks are being sold at increasingly cut-throat prices.


There is also the question in my mind: is it ethical for me to be buying masks when there is a shortage in other countries? I acknowledge the privilege I have to be in a comfortable financial position to purchase masks/respirators in Singapore, where they are readily sold on many consumer platforms – and also to be in a position to help others. What about people who can’t afford masks, or who don’t have a good home environment to spend the lockdown in? Since I am not in a position to volunteer, I looked into organisations who are helping those in need during covid-19. If you can, do consider giving to these groups:

AWARE – Vulnerable Women’s Fund: The COVID-19 pandemic underscores the already stark inequalities experienced by women in areas such as unemployment, housing, caregiving and domestic violence. This March, AWARE received 619 calls – our highest-ever number of monthly calls – with many callers dealing with emotional and psychological distress, violence and abuse. We need your help to ensure that we can continue to provide our services during this period, to aid these women in crisis.

Migrant Workers’ Assistance Fund: The assistance fund aims to provide emergency humanitarian assistance to distressed migrant workers. The assistance offered ranges from emergency shelter, daily essentials and basic sustenance needs to employment-related issues such as salary arrears. In the event that migrant workers tragically lose their lives, the MWAF may also provide their next-of-kin with monetary or in-kind assistance. The funds collected from previous fund-raising activities have benefitted many distressed workers and helped them return to their home countries, and donations allow the fund to continue providing emergency humanitarian assistance and to reach out to more migrant workers, who contribute to our economy.

Quick things to do in Blender: Video Editing, Bake Sound to F-curves, VR/3D Storyboarding, Compositing 3D model into photo, and Motion Tracking

I’ve used (and taught) Blender quite intensively for a couple of years now, but I haven’t mined all its possibilities yet, and even today when I watch different staff and students work in it I still pick up new things from time to time. My selection criterion for these features is: YES, you could conceivably do all of these even with just 5 minutes to spare, perched on the edge of the bed with your laptop and mouse, half-expecting the baby to wake up at any moment…

Things you can do rather quickly in Blender:
Simple Video Editing
Bake Sound to F-curves
Simple Compositing of 3D model into photo
3D/360 Illustration draft with Grease Pencil
Motion Tracking to composite a 3D model into a Video

1. Edit a simple video (cuts, cropping, overlays, audio, etc.) in very little time

Earlier in March I attended a training session, and because I’m a completer-finisher sort of person, I ended up doing the training material in two programs simultaneously, just to see if Blender could do everything Premiere could. It turns out that YES, you can do video editing in Blender, and for simple edits it can even be faster than Premiere! The interface looks very similar to Premiere’s, and if you switch into the Video Editing workspace there is really no excuse for any UI-related difficulties, because the layout is just so straightforward now!

Features you might need, like text overlay, image overlay, offset, cropping, transitions, fade in/fade out, panning audio, compositing, and motion tracking, are all possible in Blender! I think I might use this for my next video edit.
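
If you like scripting your edits, the VSE is also drivable from Python. Here’s a minimal sketch (Blender 2.8x; the file paths, channels and frame numbers are placeholders of mine, not from any actual project):

```python
import bpy

# A sketch of driving the Video Sequence Editor from Python:
# load two clips, overlay a title, and cross-fade between them.
scene = bpy.context.scene
seq = scene.sequence_editor_create()

clip_a = seq.sequences.new_movie("clip_a", "/path/a.mp4", channel=1, frame_start=1)
clip_b = seq.sequences.new_movie("clip_b", "/path/b.mp4", channel=2, frame_start=200)

# Text overlay strip on a higher channel
title = seq.sequences.new_effect("title", 'TEXT', channel=3,
                                 frame_start=1, frame_end=100)
title.text = "Hello from the VSE"

# Cross-fade between the two clips across their overlap
seq.sequences.new_effect("fade", 'CROSS', channel=4,
                         frame_start=clip_b.frame_start,
                         frame_end=clip_a.frame_final_end,
                         seq1=clip_a, seq2=clip_b)
```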

2. Bake Sound to F-curves

An F-curve is the interpolation curve between the keyframes of an animated property. Interpolation has modes (linear, constant, bezier) and easing equations (linear, quadratic, cubic, exponential, etc.). But the fun feature in Blender is the ability to bake a sound to an F-curve, that is, to turn the audio waveform into the curve itself, such that your animation pulses along with the audio wave.
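
As a rough sketch of the same thing via scripting (Blender 2.8x; the audio path, the frequency band, and the choice of the Z-scale channel are all placeholder assumptions of mine):

```python
import bpy

# Keyframe the active object's Z scale so there is an F-curve to bake onto
obj = bpy.context.active_object
obj.keyframe_insert(data_path="scale", index=2, frame=1)

# Bake Sound to F-Curves is a Graph Editor operator, so it needs a
# Graph Editor context (assumes one is open and the channel is selected)
for area in bpy.context.screen.areas:
    if area.type == 'GRAPH_EDITOR':
        override = {'area': area, 'region': area.regions[-1]}
        bpy.ops.graph.sound_bake(override, filepath="/path/to/music.wav",
                                 low=0.0, high=1000.0)  # frequency band to respond to
        break
```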

3. Do a sketch/storyboard for VR/360 or 3D illustration with the grease pencil

Personally I don’t use this quite enough, but the grease pencil is super handy for making rough sketches or even storyboards before you do an illustration. For example, I saw an interesting video in which someone used the grease pencil to good effect to storyboard a 360 work here:

You create an empty Grease Pencil object (Shift-A) and then go into Draw mode. You can only draw on the flat plane that you are facing, but after you draw, you can move, rotate, and scale the grease pencil strokes at will and move them all around the scene. Many possibilities!
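
In script form, that same setup is just two operator calls (a minimal Blender 2.8x sketch):

```python
import bpy

# Add a blank Grease Pencil object, mirroring Shift-A > Grease Pencil > Blank
bpy.ops.object.gpencil_add(type='EMPTY', location=(0.0, 0.0, 0.0))

# Enter Draw mode so strokes can be painted on the view plane;
# back in Object mode the strokes can be moved/rotated/scaled freely
bpy.ops.object.mode_set(mode='PAINT_GPENCIL')
```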

4. Composite a 3D model into a Photo (the simple way)

Somehow this keeps getting easier. You can set the world image background’s texture coordinate vector to “Window”, and when you look at the rendered viewport your object is now sitting in the world with all the colour and light from the background image itself. Works if you only have 5 minutes before the baby wakes up and you want something super simple. 😀
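
Node-wise it’s a single connection. Here’s a minimal sketch of the setup in Blender 2.8x Python (the backdrop image path is a placeholder):

```python
import bpy

# Build a world shader where an Image Texture mapped with "Window"
# coordinates acts as both the backdrop and the light source
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

coord = nodes.new('ShaderNodeTexCoord')
img = nodes.new('ShaderNodeTexImage')
img.image = bpy.data.images.load("/path/to/backdrop.jpg")

links.new(coord.outputs['Window'], img.inputs['Vector'])
links.new(img.outputs['Color'], nodes['Background'].inputs['Color'])
```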

5. Motion Tracking to composite a 3D model into a Video

I decided to sit down and spend a few minutes trying out camera tracking, which I’ve always known was a feature. Can you do it in a few minutes? Well, yes, in the sense that Blender can do most of the legwork for you with its camera solver, but you’ll need to spend some quality time editing the tracks for the best effect (especially to get the rotation correct). Above is an example of a terrible solve… but it kinda works!
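
If you want to script the rough first pass, something like this sketch covers detect, track, and solve (Blender 2.8x; assumes a Movie Clip Editor is open, and the footage path is a placeholder). The quality time spent fixing bad tracks still happens by hand afterwards:

```python
import bpy

clip = bpy.data.movieclips.load("/path/to/footage.mp4")

# The clip operators need a Movie Clip Editor context, hence the override
for area in bpy.context.screen.areas:
    if area.type == 'CLIP_EDITOR':
        area.spaces.active.clip = clip
        override = {'area': area, 'region': area.regions[-1]}
        bpy.ops.clip.detect_features(override)
        bpy.ops.clip.track_markers(override, backwards=False, sequence=True)
        bpy.ops.clip.solve_camera(override)  # needs enough good tracks to converge
        break
```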

The Kappa’s Izakaya: 360° Illustration Process

Recently I worked on a 360° illustration of the izakaya in Daryl Qilin Yam’s Kappa Quartet, and I was asked if I could share a bit more about the process of doing such an illustration.

Artistic disclaimer: It just so happened that I watched a lot of Midnight Diner at the time I was doing this illustration, so those spaces were definitely in my mind’s eye. There was also the show Samurai Gourmet, which was a bit tiresome to watch but had a few good shots of a traditional izakaya too. Alas, although I have visited Tokyo several times before, I haven’t actually been to a bar or izakaya in some years now…


From “Midnight Diner: Tokyo Stories”


From “Samurai Gourmet”

One thing I realised from these portraits of izakayas is that when in doubt about how to fill the bar space, you can put down stacks of tiny crockery or cover it up with a cupboard!


I even made a little crackleware… not that the detail is visible in the final render

Another disclaimer: Where 3D modelling is concerned, I mainly design spaces and architectural/interior renders and I’m not a character designer! This will probably be apparent to people who actually do proper animation / character design because here I chose to render all the other people in the scene in this weird white low-poly form. Personally I thought it a good fix for my weakness and also that it kinda worked for this ‘horror’ piece…

Initially I thought that I would try to do the entire drawing by hand, because I have enjoyed doing similar illustrations entirely by hand in the past – especially with lens distortion like this:


2 illustrations from the set of 4 that I was commissioned to do for the Commuting Reader

I usually work out a lot of the alignment for this kind of illustration by making a 3D model and rendering it with a fisheye or panoramic lens. After arranging white blocks in the space and rendering it out, I just use the lines in the render as perspective reference for my drawing.


Example: this plain equirectangular render with no materials…

And for all the other details that you need to fill in by hand, you can rely on an equirectangular grid (here is a link to an equirectangular grid by David Swart that you can use as a template) and think of it as a 4-point perspective drawing, like so:

Here’s a 4 hour sketch I made using the grid for the fun of it in 2018…
(Back when I had a lot of free time huh)
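
If you can’t find a template and just want a quick guide grid of your own, a few lines of Python with Pillow will draw one. (A minimal sketch: the 15° spacing, the 2:1 canvas and the highlighted horizon/verticals are my own arbitrary choices; this is not David Swart’s grid.)

```python
from PIL import Image, ImageDraw

# Equirectangular guide grid: verticals are longitudes, horizontals
# are latitudes; the canvas is 2:1 as equirectangular images must be
W, H = 4096, 2048
STEP = 15  # degrees between grid lines

img = Image.new("RGB", (W, H), "white")
draw = ImageDraw.Draw(img)

for lon in range(0, 361, STEP):   # vertical lines (longitudes)
    x = lon / 360 * W
    draw.line([(x, 0), (x, H)], fill="lightblue")
for lat in range(0, 181, STEP):   # horizontal lines (latitudes)
    y = lat / 180 * H
    draw.line([(0, y), (W, y)], fill="lightblue")

# Emphasise the horizon and the four 90-degree verticals
# (the "4 points" of the 4-point perspective)
draw.line([(0, H / 2), (W, H / 2)], fill="red", width=3)
for lon in (0, 90, 180, 270):
    x = lon / 360 * W
    draw.line([(x, 0), (x, H)], fill="red", width=3)

img.save("equirect_grid.png")
```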

The problem right now is that feeding and caring for Beano has made it extraordinarily difficult for me to use the tablet or Cintiq. If left to her own devices, she wants to pull on all the Type-C cables, gnaw on my stylus, and slap my Cintiq screen! Attempts to set up my workstation in the bedroom so I can use the Cintiq when she’s asleep have failed on baby-safety grounds. In fact, I’ve more or less resigned myself to the fact that spending time with the tablet is impossible now – WHO WANTS TO BUY A MEDIUM WACOM AND/OR A CINTIQ PRO IN EXCELLENT CONDITION??? – and I’ve had to streamline my time spent designing, thinking of the fastest way to get the visual output. Hours spent doing digital painting like in the old days? Not happening anymore. A Blender render is all I can muster now, which is great because whilst I feed and entertain Beano, I can easily set a render going, so it feels as if my illustration is partly doing itself whilst I’m busy…

I also use a renderfarm to speed things up a bit, and I usually do a smaller resolution render to check that things are alright before doing the full size. At 50% of the resolution I wanted, it cost about 40-60 US cents (0.85 SGD) per render. The final render at 100% resolution and twice the samples cost about 4 USD (5.60 SGD), which makes sense: 4 times the pixels at twice the samples is roughly 8 times the work.

I don’t know how most people do the next step, but I usually go through a process of annotating my renders in Monosnap and then ticking the annotations off as I do the edits:

Finally we end up with the base render, onto which I can add faces and other details in Photoshop. I find that adding a bit of noise (0.5%-2%) helps make it more ‘painterly’, because when the render is too sharp it becomes a bit disconcerting and unreal. I also drop the file periodically into this equirectangular viewer to see if the work is shaping up correctly. Common issues include: (1) things in the image that seemed further away may suddenly seem extremely close to the camera view, or (2) items may be blocked when you render the specific view, so some time needs to be spent finetuning the arrangement.


Render Breakdown

This was another work made possible by the Dingparents who came down to take care of Beano on the weekends so I could continue my artistic pursuits! I am grateful to have the time to continue to make my work.


Come see the final image at Textures: A Weekend of Words, at Sorta Scary Singapore Stories by Tusitala.

13 – 22 Mar
10am – 10pm
The Arts House

Textures: A Weekend of Words celebrates Singapore literature and its diverse community. No longer a solitary experience, reading becomes a shared adventure through performances, installations, and workshops that will take you on a trip through the literary worlds of local authors.

The third edition of the festival takes on the theme “These Storied Walls”. Inspired by The Arts House’s many identities as a Scotsman’s planned estate, our nation’s first parliament, and now Singapore’s literary arts centre, the walls of The Arts House have been etched with the stories of those who have walked these halls.

This year’s programming features more installations and participatory activities that invite you to go a step further — move a bit closer and look a little longer. As you discover undiscovered narratives of your own, join those who have come before and weave your story into the tapestry of The Arts House.

Textures is co-commissioned by The Arts House and #BuySingLit, and supported by National Arts Council

More about Sorta Scary Singapore Stories

Paintpusher: Computer-aided Oil Painting (SUPER–TRAJECTORY: Life in Motion, ArtScience Galleries, 20 February to 8 March 2020)

Behold! This is a painting made by me and a little XY plotter which pushes the paint around. (I originally gave it a title with the word “sketches” in it, because I like how it starts from a pencil sketch, to a Processing sketch, then to this plotter’s wonky sketch that pushes paint around on the canvas… but thinking about it now, I should rename the work to “Paintpusher”, because it is not really painting; it is really just pushing the oil paint around on the canvas…)

Every once in a while I get gripped by a desire to teach myself to paint hyperrealistic or photorealistic works, just for the hell of it and to see if I could master it, but then I realise it would take me years of muddling along in the good old-fashioned, humans-doing-oil-paintings-by-hand sort of way. Additionally, my own approach to understanding and making visual work has always been via the digital, so instead of mucking around helplessly in oils, I thought I would try a little “computer-aided oil painting”…

Doing ‘precision’ painting of any sort is messy and potentially very time-consuming, and now with a Bean to feed and care for (practically a 24hr job), carving out time to make art has been much more challenging (in addition to my teaching day job). Whilst spending long hours breastfeeding Beano, I had quite a lot of time to plot and scheme up things, but I only had rigidly fixed windows of time where I could personally execute the program (ie: when my parents were available to take care of Beano on the weekends). In theory, I thought that by devising a process for making the ‘precise’ paintings (and sticking to the process!) it would help me control the amount of time I was spending on “Debbiework”… although the prep work takes the longest in that case. This painting experiment would not have been possible without my parents coming down over a few weekends to help care for Beano whilst I made a big painterly mess.


The Mini Line-Drawing Machine

Line-Us Concept Image

Some time back there was a Kickstarter for a little drawing machine called the Line-us, which their promotional material rather pointedly emphasised was “NOT A TOY”. Well then, what is it exactly? I guess it is a small USB-powered plotter in which you can insert a pen and have it trace out an SVG file (you can also muck around by hand in their app and see how it partially messes up your drawing). There was also a concept GIF they released, imagining it doing watercolours.


Line-us plotting some random SVG that I made in Illustrator

The app that comes bundled with the Line-us allows you to draw on your mobile screen to control the plotter. It lets you take a photo, put it in the background, and then trace over the image yourself, which ultimately produces something not dissimilar to what you might draw with your non-dominant hand.

I’ve got to say that drawing on my phone to control the Line-us’s pen doesn’t really seem like the point of having a device like this. It makes for hilarious results from this NON-TOY, but the device makes more sense as an SVG plotter, which I’m surprised isn’t the function of its main app. Maybe they don’t want to get your hopes up about it being able to plot perfect squares and perfect circles… BECAUSE IT DOESN’T. I used this script contributed by another user (set the IP to 192.168.4.1 and it will connect to the Line-us when in red mode).
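
For the curious, under the hood it is just GCode over a TCP socket on port 1337, along the lines of the examples in Line-us’s published programming docs. A minimal Python sketch (the red-mode IP is the one mentioned above; the coordinates are illustrative, not calibrated):

```python
import socket

LINE_US_IP = '192.168.4.1'  # the Line-us in red (access point) mode

def send_gcode(sock, cmd):
    """Send one null-terminated GCode command and wait for the reply."""
    sock.send(cmd.encode() + b'\x00')
    reply = b''
    while not reply.endswith(b'\x00'):
        reply += sock.recv(1)
    return reply.rstrip(b'\x00').decode()

with socket.create_connection((LINE_US_IP, 1337)) as sock:
    # The Line-us sends a hello message on connect; read it first
    hello = b''
    while not hello.endswith(b'\x00'):
        hello += sock.recv(1)

    send_gcode(sock, 'G01 X1000 Y500 Z1000')  # pen up, move to start
    send_gcode(sock, 'G01 Z0')                # pen down
    send_gcode(sock, 'G01 X1200 Y500')        # draw a short line
    send_gcode(sock, 'G01 Z1000')             # pen up again
```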

The joy of the plotter is really in its “shonky-ness”. It gets more and more askew as we progress further away on both axes. It wobbles and trembles and if your pen is tilted on an angle, the distortion from the tilt will become more and more pronounced at the extremes of the drawing board. One of the prominent apps touted for this “NOT-A-TOY” is a game where it will draw something (somewhat badly) and you have to guess what the Line-Us is trying to draw…

Painting Process


Initial Sketches

I started with some sketches of possible approaches. I had lofty dreams of doing a landscape painting at first, but in reality I don’t have that much control (or rather, it feels like you’re in a constant state of almost losing control of the pen), and I found that with this kind of work, less is more. The more you push paint around, the more it looks like an indistinct mushy grey. Like if you smeared your face over a palette.

Line-us Manual Control – painted too long until paint became muddy

This is the mess it makes when you “overpush” the paint (output now discarded). Using manual control on the app meant that it was no different from me being an exceptionally incompetent painter. The process needs to be rigorously followed for this experiment to be meaningful, and I knew by this point that I wanted to make iterative paintings…


Processing Sketch

Referring to some of my pencil sketches, I wrote a Processing sketch to produce the drawing. I had more intentional and complex sketches at first, but as you can see, I ended up with something exceedingly basic. A super basic bezier. To be honest, anything more complicated just didn’t make a good painting.

In Processing, you can use beginRecord to echo the drawing calls to an SVG or PDF file. It generates an SVG file comprising the lines I drew with the code…
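
For illustration, here’s a minimal sketch of the beginRecord pattern, written in Processing’s Python Mode with the bundled SVG library (the bezier coordinates are placeholders, not the curve I actually used):

```python
add_library('svg')  # Processing's bundled SVG export library

def setup():
    size(400, 400)
    noLoop()  # draw once, record once

def draw():
    beginRecord(SVG, "plotter_drawing.svg")  # echo drawing calls to the SVG
    noFill()
    for i in range(8):
        bezier(20, 50 + i * 40, 150, 20, 250, 380, 380, 50 + i * 40)
    endRecord()  # close the file; the SVG is now ready for the plotter
```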


SVG Generated in Processing

And the SVG file is also readable by the plotter as a series of coordinates which, when joined up, make the drawing.

The plotter outputs look a bit wonky, but the wonkiness is consistent. If you made the Line-us repeat the SVG, it would trace over the same path, over and over again. So… it is very precisely inaccurate.

After testing out all the outputs, I prepared the canvas by using a palette knife to lay down a base colour that the plotter would paint over. I also experimented with using masking tape to mask out the area where I would be painting – I think the framing was crucial to the work looking as it does; without the framing, it just felt like a big messy paint blob; similarly without repetition one may not realise this is an iterative work or a work produced by a machine repeating an action over and over again.

After generating these tiny prints, I decided to digitise and blow up a set of 4 of them. I was originally only going to blow up one of them, but the output was better than I expected, almost resembling the fronds of a palm, with an organic form.

Initially I was going to get a normal photographic paper print, but then I happened to see at the printers how well the metallic prints brought out the colour, giving the work more three-dimensionality. So… I decided to try doing my print on metallic paper, and I love it!

The work is currently at ArtScience Museum, Level 4 until 8 March 2020. There wasn’t an opening event due to coronavirus cancellations, but come and see it when you can and let me know what you think of them. And as for next steps, I think I will build a bigger XY plotter!…


SUPER–TRAJECTORY: Life in Motion
20 February to 8 March | 10am–7pm
ArtScience Galleries, Level 4
Free admission

SUPER–TRAJECTORY: Life in Motion is a presentation of new media artworks from Taiwan and Singapore that reflects on the human experience in an era of instantaneity, transformation and conflict, where speed is the new scale.

Through a programme of installations and screenings, artists investigate the artistic and cultural consequences of new technologies, reflecting on what it means to be making art in an accelerating, media-influenced world.

The artists, in different ways, explore a digital world that generates itself and our longing for material qualities and tactile connections in our lives. We see Chih-Ming Fan, Ong Kian Peng and Syntfarm employ computational algorithms as interventions to the present moment as we are confronted with new realities; while Debbie Ding, Charles Lim and Weixin Quek Chong engage with the intimacy and agency of touch in an exploration of materiality and physicality in our relationships with technologies. In the works of Cecile Chagnaud, Mangkhut, Hsin-Jen Wang and Tsan-Cheng Wu, we encounter a delicate exchange with the artists’ worlds as they consider the notion of home and memory by mapping their personal experiences against the unprecedented impact of urbanisation.

Between today’s postdigital condition and the complex yet banal realities of contemporary life, this group of works poses the question: What are the humanistic values and principles in an increasingly formatted world?

SUPER–TRAJECTORY: Life in Motion at ArtScience Museum is a collaboration with INTERーMISSION (Urich Lau and Teow Yue Han), Tamtam ART Taiwan (Vicky Yun-Ting Hung, Wei-Ming Ho and Lois Wen-Chi Wang) and 臺南市美術館 Tainan Art Museum.

Exhibiting artists include Cecile Chagnaud, Debbie Ding, Chih-Ming Fan, Charles Lim, Mangkhut (Jeremy Sharma), Ong Kian Peng, Weixin Quek Chong, Syntfarm (Andreas Schlegel and Vladimir Todorovic), Hsin-Jen Wang and Tsan-Cheng Wu.

The first iteration of SUPER–TRAJECTORY, Media/Life Out of Balance (6 October 2019 to 3 March 2020), was presented at Tainan Art Museum, setting out this cross-regional platform for contemporary and experimental media art and exchange in discourses on technology in art.

More Info on Facebook