Explaining what is art and interaction design to 2-5 year olds

It was “Occupations” day at Bean’s childcare where they had asked if parents could come in and share about their jobs. You know, maybe wear their daily uniform and maybe bring down tools or equipment to show the children. I noticed they had prepared a wall full of common occupations, like being a firefighter, a nurse, a truck driver, a teacher….

Someone’s gotta represent the hybrid-techy-hyphenated jobs of the future, so I prepared a little introduction to myself… and brought down a touchboard and many bananas and spoons, and tried to explain that they could make art on computers. The setup (which seems suitable for all ages, adults included) involved Bare Conductive’s Touch Board, a mini speaker, crocodile clips, and a bunch of bananas and spoons. This was my deck:

Some Observations:
  • Upon seeing bananas, children often want to eat the bananas.
  • Some children examine the connection points and want to disconnect and reconnect the bananas.
  • The word “touch” is quite ambiguous and does not define how you touch a banana. Do you whack the banana? Do you squeeze the banana? Do you tap the banana lightly? Do you rest your entire hand on the banana?

Green Screen Studio Glitches

A few weeks ago I was involved in a shoot at a green screen studio and I ended up 3D scanning the green screen studio itself for the fun of it. I like the speediness with which I can cobble something together and the glitchiness it brings, like a digital version of an abstraction or a ‘painterly stroke’.

I bought a Logitech C920 recently so I could separate my webcam from my mac’s casing and make funnier shots like these. It feels like what’s been missing in a lot of the images I make is people. Inserting myself into the image is what makes me sit up.

OBS Live-streaming, Unity+Tensorflow Experiments, & “Y-Lab presents: Extended Reality”

Recently I have discovered the many uses of OBS (Open Broadcaster Software), a free open source software for all your live video needs!

When I first thought of doing some livestream recordings as a way to present new digital works, I didn’t realise it meant I actually needed to up my studio game with lights and proper mics, and now it seems I might potentially need to get a better web camera further down the line. The first thing I shot involved a webcam image of myself that was way too dark. On the third occasion when I did the livestream, the dingparents invited me to come over and shoot my things at theirs when the hacking noises in my block were too much to bear, so I lugged over my rickety portable setup… which looks like this behind the scenes:

“AMATEUR HOUR” Streaming Setup

I must confess it is still a bit like amateur hour here. Somehow the live event recording I made was still not at its best – there were a few bitrate issues I haven’t quite sorted out, and I don’t know why I felt that when things are more chaotic my “performance” seems… better? Since I don’t have a fixed workspace at home (in the daytime I work from the living room; at night, whilst Beano sleeps, I work in the bedroom watching over her), my setup has to remain quite mobile and portable. It feels like if I’m serious about doing livestreams as a format then I actually have to set up a more dedicated space in the house for all my ‘streaming’ shenanigans…

The Bread & Butter Stream

This week I also participated in my first professionally managed livestream, and actually, yes, until this point I thought that everyone just had to live through the awfulness of juggling countless screens – it hadn’t occurred to me to outsource it to someone else. I do not have multiple monitors, so when I attempt to stream it means managing a half-dozen tiny windows on screen and constantly switching between fullscreen modes (and having a niggling feeling that when windows are not maximised or arbitrarily resized, I must be totally messing up all the screen proportions / resolutions / bitrates / framerates – and yes, all those are very specific, important things which can indeed screw up the livestream output).

The Extended Reality Panel on 16 July 2021

Oh, so did you think you would watch a few videos on how to become a twitch streamer and how to use OBS and would WING IT ALL? Did you ever consider setting aside budget for your livestreamed productions? Well then, get thee a stream manager to juggle all your windows! (* ok unless you have no budget in which case… you have no choice but to shuffle on back and handle all your window management duties by yourself…)

Chroma Key in OBS with a green ikea blanket and table lamp pointed at my face? Either my teeth disappear or my blouse goes…

So…. I have been slowly figuring out how to use OBS. For example, don’t want to take over everyone’s screens with a screenshare during a Zoom call? Why not screenshare within your camera feed like I do here, except that my chroma key filter setting was slightly borked (lighting? greenscreen issue?) and everyone asks me why I am looking “a little transparent” today. -_-

Another fun thing I realised about OBS is that now anything at all on my screen can be my camera. Yes, so I can just put a youtube video on and feed it in whatever nonsensical build I am experimenting with at the moment.

This week I was playing around with Unity Barracuda + Tensorflow, which resulted in these test images where I ended up doing the classic ridiculous “let’s do a tour of all the random things on my table/floor”:

Image Classification with Tensorflow in Unity

But actually, nobody really wants to have to continuously hold stupid objects up to a camera to test whether their image classification app is working… when you could just play videos in the background to do the virtual equivalent of holding things up to the camera! So I simply connected Unity to my OBS virtual camera output. Basically now I can just grab a window capture from any web browser (or even youtube) and feed it into any app that takes webcam/camera inputs!

Tensorflow Image Classification Experiment x Music Video to “Sigrid’s Mirror”:

Tensorflow Image Classification Experiment x Music Video to “The Weeknd’s Blinding Lights”:

I am having fun reading the (mis-)classifications but in any case this is just using some ONNX computer vision models right out of the box and… IF YOU FEED IN PEANUT PICS, YOU GET PEANUTS!
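For the curious, the “reading” part is just post-processing the model’s raw output scores. Here is a minimal sketch in Python of the usual softmax + top-k step – the label list and logits below are made up, standing in for a real ONNX model’s output:

```python
import numpy as np

def top_k_labels(logits, labels, k=3):
    """Turn raw classifier scores into the k most likely (label, probability) pairs."""
    exp = np.exp(logits - np.max(logits))  # subtract the max for numerical stability
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1][:k]   # indices sorted by descending probability
    return [(labels[i], float(probs[i])) for i in order]

# Hypothetical scores for a frame full of peanuts:
labels = ["peanut", "acorn", "banana", "corn"]
print(top_k_labels(np.array([5.0, 2.0, 1.0, 0.5]), labels, k=2))
# the top guess should be "peanut"
```

Feed in peanut logits, you get peanuts.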

Other Interesting Things to do with OBS

Easy Scrolling Text Marquee: Add New Text (Freetype 2) > Right-click and select Filters > Add Scroll > Set Horizontal Scroll to about 10-30.

Fast Colour Grading: Search for free cinematic LUTs online. Right-click the video capture device > select Filters > Add LUT.

Lookup Tables or LUTs are just a set of numbers that provide a shortcut for how your colour input will be transformed into the final colour output. The LUT file is either in the form of an image like the one below or a .CUBE file (a human-readable file with a set of numbers for colour conversion use, so it’s basically saving you from having to endlessly tweak curves).
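To make that concrete, here is a toy sketch in Python of how a .CUBE lookup works. The two-point cube below is a hand-written identity LUT, and I’m using nearest-neighbour lookup for brevity where real tools would use trilinear interpolation (and real .CUBE files can carry extra header lines like TITLE or DOMAIN_MIN):

```python
import numpy as np

# A tiny hand-written 2x2x2 .CUBE file: an identity transform (output = input).
CUBE_TEXT = """\
LUT_3D_SIZE 2
0 0 0
1 0 0
0 1 0
1 1 0
0 0 1
1 0 1
0 1 1
1 1 1
"""

def parse_cube(text):
    """Read a minimal .CUBE body into a (size, size, size, 3) lattice."""
    lines = [l for l in text.splitlines() if l and not l.startswith("#")]
    size = int(lines[0].split()[1])
    table = np.array([[float(v) for v in l.split()] for l in lines[1:]])
    # .CUBE order: red varies fastest, then green, then blue
    return table.reshape(size, size, size, 3), size

def apply_lut_nearest(rgb, table, size):
    """Map one RGB value in [0,1] through the LUT (nearest neighbour, no interpolation)."""
    idx = np.clip(np.round(np.asarray(rgb) * (size - 1)).astype(int), 0, size - 1)
    r, g, b = idx
    return table[b, g, r]  # index order follows the red-fastest layout

table, size = parse_cube(CUBE_TEXT)
print(apply_lut_nearest([0.0, 1.0, 1.0], table, size))  # identity: cyan stays cyan
```

Swap the identity numbers for a cube exported from your grading tool and the same lookup gives you the “cinematic” colours.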

(Tip: if you want to export a specific effect from some other app, you can take this original default png and apply any filter to this image – whether from instagram or vsco or photoshop – and use the new image as your LUT)

Virtual Classroom Panopticon: A (former) colleague of mine once shared that he got all his students to use OBS to stream themselves working on their computers as their camera feed, so he could see all their work at once without having to get everyone to share screen, easily creating a Zoom-panopticon in the virtual classroom…

A replay of Y-Lab Presents: Extended Reality is online here:

Y-Lab Presents: Extended Reality
@ Y-Lab Online | 16 July 2021 2pm-5pm
AR. VR. MR. XR. These technologies are everywhere, as what used to be physical is now going virtual or phygital. What are the opportunities and limitations when it comes to working with these technologies as individuals and institutions? Whether you are an artist, a curator, or simply curious – join our diverse panel of speakers to explore the state and the future of XR together. Come find out about the role that you can play in this space. :wink:

  • Intro to Y-Lab by Kevin, National Gallery of Singapore
  • An Insider’s Look at XR by Lionel, iMMERSiVELY
  • The Evolution and State of XR (Technology Landscape Scan) by Yi Ren, Unity Technologies
  • Reimagining Arts and Culture with XR (Use Cases) by Jervais, National Heritage Board
  • init Virtual Worlds by Debbie, Visual Artist & Technologist
  • Intersections Between Art, Tech and Start-ups by Eugene, Dude Studios
  • Storyliving with Embodied Performances by Toph, Tinker Studio
  • Volumetric Captures and the Afterlife by Shengen, Cult Tech
  • Q&A with Panel

An Apocalyptic City in Blender: Lazy Speedrun

I was watching another lazy tutorial and had the impulse to try it out for myself. So here is a speedrun of an apocalyptic city. One basic building multiplied many times. No need for elaborate post-processing stacks or piles of artfully arranged rubble, this is the MVP (minimum viable product) shipped to you in 20 minutes (or less, in the case of this slightly sped up video…)

I think that me making these sorts of videos is the present-day equivalent of embarking on digressions whenever I have an exam or important project to complete; instead of doing the work, I suddenly get all sorts of ideas to do ridiculous things like make more screencasts of myself doing something in Blender.

For years I’ve watched countless online tutorials on YouTube, many of which were set to some generic, vaguely-inspirational electronic music (I confess that I have playlists full of things like youtube tutorial classic Tobu’s Candyland and other NCS Releases), so I took great joy in choosing royalty-free background sounds for this.

People, the keyword for this type of tutorial background music is “CORPORATE TECHNOLOGY”. Don’t go for CINEMATIC or DRAMATIC or INSPIRATIONAL, as there is a chance it might end up too teenage-over-the-top self-aggrandising. As it turns out “CORPORATE” plus “TECHNOLOGY” usually results in something blandly aspirational and futuristic.

A Shopfront in Blender and Unity: Lazy Speedrun using UV Project from View

After encountering Ian Hubert’s World Building video (how did I not see this until now?) I had an epiphany about a different way of making in Blender, besides photogrammatising things or modelling everything up from scratch. For many years I had actively avoided trying to understand UV mapping because I considered it too time consuming, and, like he mentions, the classic example is this 3D person whose unwrapped face is the stuff of nightmares:

HA – I have surely attempted to create and unwrap a face like this countless times, only to horribly botch it and create something unintentionally horrific (and horrific in not even an interesting way).

Every time this happened, I had simply accepted it to mean that I was not likely to make it as a character designer or a UV mapping specialist in this life… I mean, you gotta pick your battles. But every time I saw this map it was like a symbol of all the UV mapping I would never learn to do because I AIN’T GOT THE TIME TO DO IT…

So the UV project from view is an absolute game changer. I actually used the UV project from view to make some pieces of bread previously (for the titular BREAD in the BREAD AND BUTTER game i am working on), but I hadn’t connected the dots to the possibilities… till I saw this…

As a trial run, I did a screen recording of myself doing a speed run making a shop front in Blender and importing it into Unity which took 14 minutes in real time (including a lot of hemming and hawing and undoing). In fact, the editing of the video you see here in iMovie took way longer at 40 minutes (according to Rescuetime), including video exporting and uploading time.

The image I am using is a picture of Hiep Phat on Walworth Road. Yes, I know it is not even in Stoke Newington – it is just another shop found via the keyword “Church Street Stoke Newington”. Sometimes you just need a little hook to get you started. The image can be found on Flickr from the user Emily Webber and is shared under a CC BY-NC-SA 2.0 licence.

Ironically yes, I have titled this as a SPEED RUN using a LAZY technique because the point is that I ain’t got time to do it the complicated unwrapping way! I’m not sorry that I didn’t even unwrap the ground (pavement in front of shop) totally because even without the ground properly unwrapped it kinda passes muster!

The resulting shop front is very acceptable and definitely usable as a game asset that you might see glancingly from a distance.

Bread and Butter in a Field of Dreams (Coming July 2021)

This July, I’ll be releasing a free-to-play interactive experience titled “Bread & Butter in a Field of Dreams” for Mac/Win desktop. But actually, you could say that this project originated under a different name – “The Legend of Debbie”…

Do you want to get a reminder when
“Bread & Butter in a Field of Dreams”
is released for download,
or to hear first about
Debbie’s upcoming projects?
Join Debbie’s newsletter for all DBBD updates!


“The Legend of Debbie” was originally made as a commission for Asian Film Archive’s State of Motion in January 2021. It was my way of trying to use the archive of my own artwork as the source material for a sprawling game, exploring the different works as strange portals transporting you to weird spatialised versions of the works, and splicing my works with a partially fictionalised narrative (approximately 25% fiction, 75% reality).

The titular “legend” for the work was this directory which categorised my works into many different categories. A map legend. When I had time I was going to put more symbols all over the place, maybe have a little radar map overhead as well. I also had a lot of fun designing different rooms to represent different works.

I originally wanted to design a LIVE VR experience for “The Legend of Debbie”. Rather than releasing the game (which would have required much more testing on the development side than running it as a moderated tour), I would run it as a live event (workshop) where participants could come down in different timeslots to experience this VR game, facilitated by myself…

Imagine how fun it would be rolling through these odd spaces…

But then the Phase 2 Heightened Measures kicked in again, so we couldn’t have live events like this anymore. So… I did not make a VR version for “The Legend of Debbie”. And in any case, there was something that disturbed me about the final presentation of Legend.


I have come to the conclusion that there is no room for nuance. Or maybe I am not very good at nuance (it is something I am working on, but I suspect that nuance does not come easily to me mainly because my real life personality is too excitable and shouty and maybe a bit childlike and overly earnest at heart).

Instead of developing The Legend further, I somehow ended up making a completely new game from scratch. One in which very deliberately NONE of the works were shown in the game world in their original form, besides the first room which replicates the Wikicliki exhibition by the Singapore Art Museum, currently in the Ngee Ann Kongsi Concourse Gallery (Basement) of National Gallery Singapore. The show runs until 11 July 2021.

Since we couldn’t be in the gallery itself for the talk, I re-created the gallery for a talk on 29th May (a conversation between myself and curator Mustafa, whom I have worked closely with over the last few months). Instead of boring slides, I brought the items that Mustafa was interested in discussing into the gallery space as various 3D modelled props on a table, including a few laptops handily scrolling through my actual Wikicliki and a spreadsheet of the Here the River Lies cards (many credits to George for painstakingly digitizing them).

From this totally realistic representation of a real exhibition you eventually get teleported to another world where there are lots of objects which are directly representative of the projects I’ve worked on over the last 10 years, but nothing is represented in the original form that it was made.

In the world of the Field of Dreams, every single artwork I have made in the last 10 years is turned into a transmogrified version of itself – a pop translation of the work which could comfortably exist within a commercially lucrative museum retail shop (a la MOMA shop or NAIISE or any one of those shiny design shops)… or in a dusty basement reading room within an alternative community-based establishment for which there is no lack of heart but financial viability is always a question (such as The Substation’s Random Room).

Somehow making art is an act of translation for me. I don’t really seem to start by drawing or sketching, but by writing, and then I have to translate that into sketches, and from sketches into whatever digital medium I am working in. And this act of translation seems so arbitrary at times. Many ideas could have turned out differently had I chosen to make them in a different medium. Perhaps this expresses the tension I feel between making work as an artist and work as a designer/design educator (which earns me my living). The art can be poetic and ruminative and open-ended, whereas the design has to fulfill the brief requirements and ultimately has to be functional (and most likely measurable).

So I thought that instead of a Space Geode rendered all in white, I would have a mildly tacky Space Geode Citrus Squeezer; instead of The Library of Pulau Saigon, its various components would be turned into functional items such as a Tic-tac-toe set featuring the Chinese Spoon as the noughts and the Political Party Badge as the crosses (something with the potential to be a slightly tacky coffee table centerpiece). My pulsed laser holography work, “War Fronts”, would be rendered instead as a Jigsaw set. And instead of my print of 100 of my dreams from my Dream Syntax book, I turned it into a Scratch-off-chart of the 100 dreams. Because scratch off maps are all the rage now on everyone’s internet shopping list, aren’t they?

Along the way I er…. got a bit too excited because who needs to write a book when you can just make the cover for the book? I was churning out dozens and dozens of pdf book cover textures to populate the DBBD SHOP.

So, perhaps we can’t quite call this work “The Legend of Debbie 2.0” anymore. Maybe this should be called by the name that seems more appropriate for it now: Bread & Butter in a Field of Dreams.

The work takes its name from a 2013 ACES study by the NAC – apparently the first survey of its kind done on arts and cultural workers to examine how on earth they make their living. I do not know which unnamed arts/cultural worker would give the survey such an evocative name, but here I have made the breads and butters literal, to be collected up before you can gain entry to the next scene.

Special mention also goes to another big survey I participated in not too long ago, which asked artists some very sobering questions about what we thought had advanced our artistic careers or had inhibited our careers, with a dropdown list of items that could potentially limit our careers being twice as long as the advancing list. (In an earlier iteration of the study, it suggested that we dig up our past 10 years of tax returns to examine the difference between our art income and non-art income. Me, I almost thought this was like some cruel form of “formative assessment” – “Alright, you got me, I’ve NOT been solely living off my earnings as an artist, and in fact, at times this whole “art” thing is frequently a complete loss leader operation!”) I have many ambivalent feels about this. On one hand, my desire to make art isn’t about the money, but on the other hand I also do want to fix the current state of affairs…

There’s a maze and some other weird shizz coming up…

The world is still very much a work-in-progress and I look forward to fleshing it out for July’s “workshop” and to being able to release it as a free game for download! My goal is a release by July 2021! Methinks I might even do it as a gameplay video – I quite enjoyed this live stream (ticketed as a workshop, but really more like a twitch stream, with me having set up OBS and all the ridiculous animated overlays and chats).

I also did another detailed breakdown of the time I spent on this last week using Rescuetime. Rescuetime tracks the time I spend in each app, and it is handy in that it splits the time into working hours (defined as 9am-6pm) and non-working hours (6pm-9am), so I can sift out the time I spend on personal projects versus time on my day job. My secret to eking out the time is usually to work for 1-2 hrs after Beano sleeps at night and to wake at about 4-5am to work.

It goes to show that despite working full time and having a time-consuming baby bean (with the help of dingparents dutifully caring for her whilst I work), it is still possible to eke out the time to maintain an active artistic practice if one has the will to do so (and the discipline to wake up early).

It does feel like a culmination of 3D skills I have taken years to acquire:
2014: when I realised how non-existent my 3D design skills were
2016: when I made myself try to make one blender render a day
2017: intentionally producing new works using 3D
2019: intentionally producing new works in Unity (very basic at that stage)
2020: taking the Unity Developer cert at my workplace, supervising more Unity-based projects
2021: being able to build things like this in a week (on top of a separate full-time job)

I’ve seen several GDC talks and devlog videos on youtube detailing how every successful game dev probably has dozens of “failed games” before they finally make the one game they are happy with, that one breakthrough game. Likewise I don’t expect Field of Dreams to be perfect on its July 2021 release but I hope to collect lots and lots of feedback after releasing it so I can improve the experience!



Streetart Straße

Every day when I open up a taxi or ride-share app to book a ride to work or a meeting, I notice one detail that sticks out on my map: there are several points near my home that are labelled “Streetart Straße”. Indeed, beautiful murals on shophouses are a common sight in the area that I live in, but why on earth are these points being highlighted to me above other actual landmarks here?

Why is it in German? And why does it seem that this map item has been set to show even at the lowest zoom levels, where most other details are filtered out? (Map zoom levels refer to how, at the lowest levels, you might only see continents and broad country labels, while at the highest levels you see cities and their details. Data is selectively shown at different zoom levels so that the map remains readable.)
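Here is a toy sketch in Python of that filtering idea – the feature names and min_zoom values are entirely made up for illustration:

```python
# Each map feature carries a minimum zoom level; a renderer only draws
# features whose min_zoom is at or below the current zoom. The names and
# min_zoom values here are invented for illustration.
features = [
    {"name": "Asia",             "min_zoom": 1},
    {"name": "Singapore",        "min_zoom": 5},
    {"name": "Stamford Road",    "min_zoom": 14},
    {"name": "Streetart Straße", "min_zoom": 1},  # tagged to show at every zoom level
]

def visible_at(zoom, features):
    """Return the names of features drawn at the given zoom level."""
    return [f["name"] for f in features if f["min_zoom"] <= zoom]

# Zoomed way out: only continent-scale labels survive… and the street art.
print(visible_at(3, features))
```

Which is roughly why a stray street-art tag can end up sitting next to “Asia” on your taxi app.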

So I decided to google it a bit…

Contributions by Clara95 on OpenStreetMap

The answer is mundane. It appears that a (likely born 1995, female) German traveller toured through Singapore, Thailand and Vietnam and decided to create a half dozen map points for street art, fast food, pizza places, and bus stops on Openstreetmap. ¯\_(ツ)_/¯

Not gonna lie – when I remembered that OSM was editable (and that I’ve lived here for years and still haven’t left my mark on Openstreetmap, unlike a traveller passing through these parts…), it immediately led me to this… reactivating my account…

I’ll report back when I’ve finally managed to make a positive or interesting dent in the REAL MAP OF THE WORLD…

* Oh but also not this kind of dent. I found this when browsing in editor view. Por… whoever you are, er… we don’t need to know your exact house unit!!!

Wikicliki Gallery: The Prototype

Mustafa and I had wanted to do a talk for the Wikicliki exhibition inside the gallery itself, but we couldn’t do it due to the heightened phase 2 Covid-19 restrictions in Singapore at the moment. So, I decided to challenge myself to create a replica of the gallery so that we could wander about whilst speaking and look at the artwork.

The simple base model was made in Blender.

I imported the model I made in Blender into Unity and then added the materials there. I also downloaded a free Mars Skybox and threw that in the background because WHY NOT. Finally, instead of having slides, I brought in 3 tables and put all the bits and pieces on the table, including a copy of my Dream Syntax book, a hard drive that looks like a hard drive currently on my table, and some laptops which play back videos that make it look like someone is browsing my websites – so we could do a live show and tell.

How much time does it take to make a small interactive thing like this? I usually don’t have an exact number for this, as with many things that involve programming. Sometimes I get stuck on problems for ridiculously long, and sometimes I simply have a nap and BOOM I wake up and have been gifted the solution in a dream. So… for once, I have made a breakdown of the actual time I spend making these “things”.

According to Rescuetime which I use to track all of my personal devices (personal laptop, tablets, mobile phone), it took me a total of 45 minutes to make the entire model in Blender on Friday 7 May 2021 (after working hours).

Rescuetime calculates my working time to be 9am-6pm, and shows two graphs for what I do during my working hours and what I do outside of working hours.

The report for that particular day also incidentally shows where I sometimes fritter my time away when I am not working – er… randomly scrolling on Tiktok and youtube, contemplating whether I should order a foodpanda, opening late night DM student messages on Slack, accidentally typing “FACE-” into my browser and then immediately stopping myself and closing the browser window…

Here’s another full screenshot of my Rescuetime dashboard for the day before the talk: Friday May 28.

As is to be expected, my teaching day job involves a lot of Zoom meetings, looking at Google Slides/Docs/Sheets, and Whatsapp communications. On top of this, I realised that the no fewer than 4 solid hours I spent on Zoom attending student presentations on Friday using my work PC are not reflected!! (I guess I am a two-computer person when I am at work, mainly typing on my personal laptop as I watch the Zoom on the work laptop. I suppose if they totted up the numbers from TWO computers at once it would show me using more hours than there are in my 24-hour day?)

Week of 24-29 May 2021 – Time spent working in Unity
Monday – [Did not spend any time working in Unity, instead wrote and published my first newsletter on Mailerlite, declaring my intention to make a thing for the Saturday talk]
Tuesday – 1 hour 18 minutes
Wednesday – 30 minutes
Thursday – 2 hours 13 minutes
Friday – 2 hours 15 minutes
Saturday – 2 hours 17 minutes
Total time spent in Unity: 8 hours 33 minutes (8.55 hrs)
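(As a sanity check, the week’s sessions do tot up – a throwaway sketch in Python:)

```python
def total_minutes(durations):
    """Sum (hours, minutes) pairs into whole hours, leftover minutes and decimal hours."""
    mins = sum(h * 60 + m for h, m in durations)
    return mins // 60, mins % 60, round(mins / 60, 2)

# Tuesday to Saturday's Unity sessions from the log above (Monday contributed nothing):
week = [(1, 18), (0, 30), (2, 13), (2, 15), (2, 17)]
print(total_minutes(week))  # (8, 33, 8.55)
```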

What do these numbers mean?


I don’t know. I just know it’s important to understand how long each task takes me so I can gauge what is feasible for me to complete, or use the numbers to improve on my so-called ‘velocity’ for a project of this type. So I guess I’ll continue collecting data until I find a way to analyse it.

Here’s an itch.io for all half-bakery items: https://dbbd.itch.io/

Do you want to hear first about
Debbie’s upcoming projects?

Join Debbie’s newsletter for all DBBD updates!

Sucked into the vacuum of a black hole: Unity Shader Graph

One of the effects I’ve liked in games is the “everything being sucked into a black hole vacuum” look. What are the correct words to describe it? Does this effect already have a proper name or keyword? In Virtual Virtual Reality, the conceit is that you’re the human employee pruning and vacuuming some machine’s lovely domestic abode and suddenly without warning it is as if you have done a terrible thing; you’ve accidentally sucked the first layer of the world away by mistake!…

Today, I was reminded of it again whilst watching the (ever inspiring) devlog for the indie game Farewell North, so I wanted to figure out how it was made. In Farewell North, it seems like it is being used in the playback scene for memories; the visual effect of being sucked back into the projector is exactly what makes them feel like ethereal memories being played back.

This evening I spent a while trying to figure it out. The answer seems to be Unity’s Shader Graph (which I’ve actually never properly used before, but it reminds me of Blender’s shader nodes, so I guess I roughly get the node-based system). I looked around for examples and explanations of how it was created, and I am glad to say that with all the power of the internets and online resources, it was indeed possible for me to understand how one can recreate the “sucked into a vacuum” effect. Lerp refers to linear interpolation: the value changes from a to b over t. There’s a Vector3 to set the origin point of the “BlackHole”, or where everything will be sucked into / spat out of. And then there is a slider property for the “Effect” (a bit like “time” in this case) which can be helpfully tweaked in the Inspector for testing purposes. “Range” is a fixed value. There’s obviously a lot more I can experiment with in Shader Graph, but for now… a working example of the “Sucked-into-a-blackhole-vacuum” Shader Graph looks like this:

My basic version of the “Sucked-into-a-blackhole-vacuum” look…

Imagine my Spheres and Cubes teleporting endlessly from this world to another world and then back again – oh wait now you don’t even have to imagine it, here’s an actual visual representation of it!
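The whole node graph really boils down to one lerp per vertex. A rough sketch of the math in Python/numpy (the property names mirror the ones described above; the actual Shader Graph runs this per-vertex on the GPU):

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation: returns a when t=0 and b when t=1."""
    return a + (b - a) * t

def suck_into_black_hole(vertices, origin, effect):
    """Pull every vertex toward the black-hole origin.

    effect=0 leaves the mesh untouched; effect=1 collapses every vertex
    onto the origin (fully "sucked in"). Animating effect from 0 to 1
    gives the vacuum look.
    """
    return lerp(vertices, origin, effect)

cube = np.array([[1.0, 1.0, 1.0], [-1.0, 2.0, 0.0]])
origin = np.array([0.0, 0.0, 0.0])
print(suck_into_black_hole(cube, origin, 0.5))  # every vertex halfway to the origin
```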

Low Poly 3D People of Second Life

Despite being a long-time casual Second Life user, I have always used a Macbook Pro, which has consistently seemed barely able to handle the graphics for Second Life. You would think that I would have switched to a proper gaming pc by now, but somehow, despite having tried to switch to Windows, I still have a preference for the Macbook Pro…

The price for sticking to the Mac is that, in order to avoid lagging in Second Life, I have gradually turned the Quality lower and lower, and the Draw distance smaller and smaller, until I’ve even on occasion set it to a ridiculous 64m. It means that things in the distance (a range which I should clearly be able to see) do not load until they are within 64m of my virtual self. When I zoom in and out, things appear and disappear, meshes load and unload.

So in Second Life there’s the concept of “Land Impact”: when you upload certain meshes at certain scales, even if you thought it was a low poly model in Blender, it may be interpreted as a complex mesh due to the “level of detail” settings on the model at the point of importing. So a lot of shops on SL Marketplace sell their wares by advertising the low land impact that their items have. No use having something beautiful that can’t be loaded by many people because it uses too many resources. There’s a very interesting post here about how ‘detailed’ meshes can be uploaded with low land impact, and there’s much to understand about the scale at which one imports the file and the LOD (level of detail) rings, which affect how the object is viewed from different distances.
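A toy sketch in Python of the LOD-ring idea – the distances and triangle counts below are made up, but the selection logic is the gist of it:

```python
# Sketch of LOD rings: each ring pairs a maximum viewing distance with a
# mesh of a given complexity. The distances and triangle counts here are
# invented for illustration, not Second Life's actual values.
LOD_RINGS = [
    (16.0, "high detail (5000 tris)"),
    (64.0, "medium detail (800 tris)"),
    (256.0, "low detail (120 tris)"),
]

def pick_lod(distance, rings):
    """Return the mesh for the first ring whose max distance covers the viewer."""
    for max_dist, mesh in rings:
        if distance <= max_dist:
            return mesh
    return "impostor / not drawn"  # beyond the last ring (or the draw distance)

print(pick_lod(100.0, LOD_RINGS))  # a mid-distance viewer gets the low-poly mesh
```

Which is also why, with my draw distance squeezed down to 64m, so much of my world sits permanently in the outer rings.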

So recently, whilst walking around the “Village de Provence” in La Garde-Aris in Stingray Bay, I encountered a holiday scene of visitors to a village tourist spot. Lest you feel lonely in this beauty spot, like many other places in Second Life, they’ve scattered lots of 3d people all over the place so you can feel like one with the other holidaymakers and shoppers…

But don’t worry, these low poly people are just about to load up properly!!

Yeah, don’t you go imagining some fantastic hi res metaverse experience when I tell you about me walking about in Second Life… because this is actually what my Second Life experience is like sometimes on this machine… <insert sweatdrop>