{"id":26,"date":"2019-12-20T07:23:00","date_gmt":"2019-12-20T07:23:00","guid":{"rendered":"http:\/\/dbbd.sg\/blog\/uncategorized\/playing-around-with-jupyter-notebook-sketch-rnn-neural-style-transfer\/"},"modified":"2021-05-13T09:48:44","modified_gmt":"2021-05-13T09:48:44","slug":"playing-around-with-jupyter-notebook-sketch-rnn-neural-style-transfer","status":"publish","type":"post","link":"https:\/\/dbbd.sg\/blog\/2019\/12\/playing-around-with-jupyter-notebook-sketch-rnn-neural-style-transfer\/","title":{"rendered":"Playing around with Jupyter Notebook, Sketch RNN &#038; Neural Style Transfer"},"content":{"rendered":"<p><img loading=\"lazy\" class=\"alignnone size-full wp-image-1136\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM.png\" width=\"1280\" height=\"382\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM.png 1280w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM-300x90.png 300w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM-1024x306.png 1024w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM-768x229.png 768w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM-1200x358.png 1200w\" sizes=\"(max-width: 1280px) 100vw, 1280px\" \/><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-full wp-image-1150\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM.png\" width=\"1280\" height=\"958\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM.png 1280w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM-300x225.png 300w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM-1024x766.png 1024w, 
https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM-768x575.png 768w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.31AM-1200x898.png 1200w\" sizes=\"(max-width: 1280px) 100vw, 1280px\" \/><\/p>\n<p>This week as part of my work I went to a 2-day crash course in TensorFlow for NLP, which is admittedly ridiculous because (a) 2 days? what can one accomplish in 2 days? would we not be better off slowly studying ML via a MOOC on our phones? or the <a href=\"https:\/\/developers.google.com\/machine-learning\/crash-course\">Google Machine Learning Crash Course?<\/a> and the <a href=\"https:\/\/www.tensorflow.org\/tutorials\">official TensorFlow tutorials<\/a>? (b) I am struggling with both the practical side (I have absolutely no maths foundation) and the theoretical side (I don&#8217;t even understand regression models, but, I mean, do I need to understand regression models anyway?)<\/p>\n<p>Which then raises the question: DO I REALLY NEED TO PEEK INSIDE THE BLACK BOX IN MY LINE OF WORK?<\/p>\n<p>Or, WHAT IS MY LINE OF WORK ANYWAY? And how much technical understanding do I really need to have?<\/p>\n<p>Now I obviously don&#8217;t feel like I&#8217;m in any position to design the innards of the black box myself, but I&#8217;d like to be the person who gathers up all the inputs, preprocesses them, and stuffs them through the black box myself, so as to obtain an interesting and meaningful output (basically I&#8217;m more interested in the <a href=\"https:\/\/developers.google.com\/machine-learning\/problem-framing\/cases\">problem framing<\/a>). 
But existential crises aside, this post is to gather up all my thoughts, outputs (ironically unrelated to the course I was at, but this is a personal blog anyway), and relevant links for the time being (pfftshaw, at the rate things are going they&#8217;ll probably be outdated by 2020&#8230;)<\/p>\n<h1>Jupyter Notebook<\/h1>\n<p><a href=\"https:\/\/jupyter.org\/\">Jupyter Notebook<\/a> is the wiki I wish I always had! Usually when working in Python you&#8217;re in the shell or an editor, and I make my wiki notes in a linear fashion to recount the story of what I was doing (in case I want to revisit my work at a later point). For the purposes of learning I find it most useful to think of it as a linear narrative.<\/p>\n<p>Jupyter is the new shell where you can do precisely that &#8211; write a linear narrative of what you think you were doing &#8211; alongside the cells of code that you run. It&#8217;s generally quite easy to set up Jupyter Notebook via <a href=\"https:\/\/www.anaconda.com\/distribution\/\">Anaconda<\/a>, which will install both Python and Jupyter Notebook; you can then paste the link from the terminal into your browser.<\/p>\n<p><center><img loading=\"lazy\" class=\"alignnone size-full wp-image-1154\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup.png\" width=\"722\" height=\"703\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup.png 722w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup-300x292.png 300w\" sizes=\"(max-width: 722px) 100vw, 722px\" \/><\/center><img loading=\"lazy\" class=\"alignnone size-full wp-image-1156\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup2.png\" width=\"994\" height=\"678\" 
srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup2.png 994w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup2-300x205.png 300w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/ch1_introduction_to_jupyter_pandas_exercises-Jup2-768x524.png 768w\" sizes=\"(max-width: 994px) 100vw, 994px\" \/><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-full wp-image-1158\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/nlpworkshop_mnist.ipynb-Colaboratory-GoogleCh.png\" width=\"721\" height=\"753\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/nlpworkshop_mnist.ipynb-Colaboratory-GoogleCh.png 721w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/nlpworkshop_mnist.ipynb-Colaboratory-GoogleCh-287x300.png 287w\" sizes=\"(max-width: 721px) 100vw, 721px\" \/><\/p>\n<p><small>I could have embedded my notebooks instead of screenshotting them, but I ain&#8217;t gonna share my notebooks cos these are just silly &#8220;HELLO WORLD&#8221; type tings&#8230;<\/small><br \/>\nLet&#8217;s say you don&#8217;t want to run it in a local environment. That&#8217;s fine too because you can use the cloud version &#8211; Google Colab. You can work in the cloud, upload files, and load files in from Google Drive. You can work on it at home with one computer and then go into the office and work on it with another computer and a different OS. You can write in <a href=\"https:\/\/github.com\/adam-p\/markdown-here\/wiki\/Markdown-Cheatsheet\">Markdown<\/a> and format equations using LaTeX.<\/p>\n<p>As an interactive notebook there are so many opportunities for storytelling and documentation with Jupyter Notebook. 
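<\/p>\n<p>Getting a local notebook server running via Anaconda is just a couple of terminal commands &#8211; a minimal sketch, assuming Anaconda is already installed (your exact URL and token will differ):<\/p>\n<pre><code># install and launch Jupyter Notebook via conda\nconda install -y jupyter\njupyter notebook\n# the terminal prints a local URL with a token, e.g.\n# http:\/\/localhost:8888\/?token=...\n# paste that into your browser to open the notebook dashboard\n<\/code><\/pre>\n<p>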
And if you like things to be pretty, you can style both the notebook itself and the outputs with CSS.<\/p>\n<h1>Sketch RNN<\/h1>\n<p>I followed the <a href=\"https:\/\/colab.research.google.com\/github\/tensorflow\/magenta-demos\/blob\/master\/jupyter-notebooks\/Sketch_RNN.ipynb#scrollTo=iKHL-LmnSLpB\">Sketch RNN tutorial<\/a> on Google Colab to produce the following Bus turning into a Cat&#8230;<\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-full wp-image-1159\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/bustocat.png\" width=\"917\" height=\"126\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/bustocat.png 917w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/bustocat-300x41.png 300w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/bustocat-768x106.png 768w\" sizes=\"(max-width: 917px) 100vw, 917px\" \/><\/p>\n<p>I love the <a href=\"https:\/\/quickdraw.withgoogle.com\/data\">Quick Draw<\/a> project because it is so much like the story I often tell about how I used to quiz people about what they thought a scallop looked like, because I realised many Singaporeans think that it is a cake instead of a shellfish with a &#8220;scalloped edge shell&#8221;.<\/p>\n<p>I love the shonky-ness of the drawings and I kinda wanna make my own data set to add to it, and perhaps the shonky-ness is something I can amplify with my extremely shonky USB drawing robot, which could use the vector data to make some ultra shonky drawings in the flesh.<\/p>\n<p><small>Now that I have accidentally written the word <b>shonky<\/b> so many times I feel I should define what I mean: &#8220;shonky&#8221; means that the output is of dubious quality, and for me the term also has a certain comedic impact, like an Eraserhead baby moment which ends in nervous laughter. 
(Another word I like to use interchangeably with &#8220;shonky&#8221; is the Malay word &#8220;koyak&#8221;, which I also imagine to have comedic impact)<\/small><\/p>\n<p><center><img loading=\"lazy\" class=\"alignnone size-full wp-image-1161\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/treetrunks.gif\" width=\"480\" height=\"368\" \/><\/center>E.g. when Tree Trunks explodes unexpectedly&#8230;<\/p>\n<h1>Neural Style Transfer<\/h1>\n<p>I followed the <a href=\"https:\/\/colab.research.google.com\/github\/tensorflow\/models\/blob\/master\/research\/nst_blogpost\/4_Neural_Style_Transfer_with_Eager_Execution.ipynb#scrollTo=jo5PziEC4hWs\">Neural Style Transfer using TensorFlow and Keras tutorial<\/a> on Google Colab to produce the following:<\/p>\n<p>Beano x Hokusai<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1163\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245848791_d218593872_c.jpg\" alt=\"Neural Style Transfer with Eager Execution - Colab3\" width=\"681\" height=\"800\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245848791_d218593872_c.jpg 681w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245848791_d218593872_c-255x300.jpg 255w\" sizes=\"(max-width: 681px) 100vw, 681px\" \/><\/p>\n<p>Beano x Van Gogh&#8217;s Starry Night<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1164\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378128_bb9bdb41be_c.jpg\" alt=\"Neural Style Transfer with Eager Execution - Colab4\" width=\"688\" height=\"800\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378128_bb9bdb41be_c.jpg 688w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378128_bb9bdb41be_c-258x300.jpg 258w\" sizes=\"(max-width: 688px) 100vw, 688px\" \/><\/p>\n<p>Beano x Kandinsky<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1165\" 
src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378138_2f2874e30f_c.jpg\" alt=\"Neural Style Transfer with Eager Execution - Colab5\" width=\"690\" height=\"789\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378138_2f2874e30f_c.jpg 690w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245378138_2f2874e30f_c-262x300.jpg 262w\" sizes=\"(max-width: 690px) 100vw, 690px\" \/><\/p>\n<p>Beano x Ghost in the Shell<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1166\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377468_08dd7006c7_c.jpg\" alt=\"Copy of Neural Style Transfer with Eager Execution gots\" width=\"677\" height=\"795\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377468_08dd7006c7_c.jpg 677w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377468_08dd7006c7_c-255x300.jpg 255w\" sizes=\"(max-width: 677px) 100vw, 677px\" \/><\/p>\n<p>Beano x Haring<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1167\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377693_ae10242e84_c.jpg\" alt=\"Copy of Neural Style Transfer with Eager Execution_haring\" width=\"698\" height=\"789\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377693_ae10242e84_c.jpg 698w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377693_ae10242e84_c-265x300.jpg 265w\" sizes=\"(max-width: 698px) 100vw, 698px\" \/><\/p>\n<p>Beano x Tiger<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1168\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377783_b44834a588_c.jpg\" alt=\"Copy of Neural Style Transfer with Eager Execution_tiger\" width=\"690\" height=\"794\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377783_b44834a588_c.jpg 690w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377783_b44834a588_c-261x300.jpg 261w\" 
sizes=\"(max-width: 690px) 100vw, 690px\" \/><\/p>\n<p>Beano x Klee<br \/>\n<img loading=\"lazy\" class=\"alignnone size-full wp-image-1169\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377583_96463cfe27_c.jpg\" alt=\"Copy of Neural Style Transfer with Eager Execution\" width=\"699\" height=\"784\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377583_96463cfe27_c.jpg 699w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/49245377583_96463cfe27_c-267x300.jpg 267w\" sizes=\"(max-width: 699px) 100vw, 699px\" \/><\/p>\n<p>How does this work? The <a href=\"https:\/\/arxiv.org\/pdf\/1508.06576.pdf\">paper<\/a> describes how the style of an image can be captured by taking the feature correlations of multiple layers of a convolutional network, which yields a multi-scale representation of the original input image that captures its texture information but not its global arrangement. The higher layers, in turn, capture the high-level content &#8211; the objects and their arrangement in the input image &#8211; but do not constrain the exact pixel values of the reconstruction.<\/p>\n<p><center><img loading=\"lazy\" class=\"alignnone size-full wp-image-1171\" src=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM.png\" width=\"1280\" height=\"882\" srcset=\"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM.png 1280w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM-300x207.png 300w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM-1024x706.png 1024w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM-768x529.png 768w, https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at7.51.34AM-1200x827.png 1200w\" sizes=\"(max-width: 1280px) 100vw, 1280px\" \/><\/center>Image Source: <a 
href=\"https:\/\/arxiv.org\/pdf\/1508.06576.pdf\">&#8220;A Neural Algorithm of Artistic Style&#8221; by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>This week as part of my work I went to a 2-day crash course in TensorFlow for NLP, which is admittedly ridiculous because (a) 2 days? what can one accomplish in 2 days? would we not be better off slowly studying ML via a MOOC on our phones? or the Google Machine Learning Crash Course? and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1136,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[],"tags":[41,40,39,43,42],"jetpack_featured_media_url":"https:\/\/dbbd.sg\/blog\/wp-content\/uploads\/2021\/05\/Screenshot2019-12-21at9.42.13AM.png","_links":{"self":[{"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/posts\/26"}],"collection":[{"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/comments?post=26"}],"version-history":[{"count":2,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/posts\/26\/revisions"}],"predecessor-version":[{"id":1356,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/posts\/26\/revisions\/1356"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/media\/1136"}],"wp:attachment":[{"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/media?parent=26"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/categories?post=26"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dbbd.sg\/blog\/wp-json\/wp\/v2\/tags?post=26"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}