
Tutorial for Artists on how to use a Neural Network — Part 2

Aslan French


Originally published at www.jackalope.tech on April 30, 2018.

Okay, so we installed the Ubuntu partition last week, and now we’re going to install the neural network Deep Style. This is probably the most difficult part of the whole process, so I’m going to equip you with the tools to solve the problems that come up.

What is a CLI?

When you use a program such as your internet browser or Photoshop, you are using a graphical user interface (GUI). Before GUIs there were CLIs: command line interfaces. A GUI lets you control a program using buttons. A command line interface lets you control it using written commands. When you tell Siri to navigate you to your friend’s house, you are in a way using a modern version of a CLI. Siri is much more sophisticated though.

It can listen to your voice and translate that into written words. It can ask for clarification, or intelligently make assumptions about what you actually want. A CLI is much simpler. It’s very literal and strict in how you give it commands. In a way, using a CLI is a lot like using a programming language. FEAR NOT! It’s also a lot like asking Siri to do something for you. It’s not really all that hard.

A big advantage of a CLI is that you can create “batch scripts”. Batch scripts let you automate a bunch of tedious jobs with a little bit of programming. I’ll be covering batch scripts more in Part 3, but for now what you need to know is that the language you write these scripts in on Linux is called Bash.
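Just to give you a taste, here is a tiny sketch of what a Bash script looks like. This one is purely hypothetical; it just makes a backup copy of every JPEG image in the current folder. Don’t worry about the details yet, Part 3 will walk through scripts like this.

#!/bin/bash
# Make a backup copy of every .jpg file in the current folder.
for image in *.jpg; do
  [ -e "$image" ] || continue   # skip the loop if there are no .jpg files at all
  cp "$image" "backup_$image"
done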

There are some people who spend their entire careers mastering Bash. You won’t need to know very much of it to use the DeepStyle neural network though. What you will need to do is use Bash to install the neural network, and I’ll explain every step of the process for you.

If you feel up to it, I highly recommend this interactive tutorial on Codecademy. It’s actually a lot of fun! You use Bash to do things like edit or write poetry. This is not necessary though.

To access a CLI program you need to open your terminal. On Ubuntu you can do this by clicking the start/programs button and then typing “terminal”. It will search for the default terminal application. Then just press Enter and you’ve got it! (The keyboard shortcut Ctrl+Alt+T also opens a terminal.)

Sometimes throughout these tutorials you’ll see me using a weird looking terminal that looks like an old computer screen. It’s called Cool Retro Term. I decided to use it because I wanted something cool looking for the tutorials, but the default terminal is all you really need, and they function basically the same.

The Big Picture

Here are the broad strokes of what we are going to do.

  1. Install the dependencies for the Deep Style Neural Network.
  2. Install the Deep Style Neural Network.
  3. Install CUDA to enable GPU processing.

What that means:

Dependencies are pieces of code that a programmer uses to help build their own code. Good code is like Lego blocks: you can snap them in and out.
You can think of dependencies like standardized auto parts. If I wanted to build a car I wouldn’t make my own tires or bolts or even the engine from scratch. Programming is all about standing on the shoulders of giants.

The main dependencies we will download are Torch7 and loadcaffe. They have their own dependencies in turn: Torch7 needs LuaJIT and LuaRocks, and loadcaffe needs Google Protocol Buffers. You don’t really need to know what any of these things are, just like you don’t need to know exactly how a spark plug works to install it. In broad strokes, Torch7 is a framework for people to build their neural networks on.

Loadcaffe lets Torch7 load models built with the Caffe framework. Caffe is a neural network framework made by UC Berkeley to do image classification and segmentation.

Finally, CUDA is a framework by Nvidia that lets programmers use the GPU to do the kind of work a CPU would normally do. GPUs are great at calculations that process large chunks of data at the same time. Without CUDA your computer will only be able to use the CPU to run the neural network. That will limit the size of images you can produce, and it will take much, much longer to process.

The important takeaway here is that there are A LOT of people working together to make this stuff possible!

How to get help

Nobody can do this stuff alone. One of the most important skills in programming is being able to ask for help. Stuff is fixing to get pretty technical, so you’re going to need the tools to troubleshoot problems when (not if) they arise.

Ways you can get help:

  1. You can message me at my Facebook page or email me.
  2. You can post an issue to the Deep Style GitHub page. GitHub is a website where people share and collaborate on code. Justin Johnson built the implementation of Deep Style that we will be using.
  3. Finally, you can check out Stack Exchange, a network of question-and-answer sites. There are two you will want to check out specifically: AskUbuntu if the issue is related to Ubuntu, or Stack Overflow for general programming questions.

IMPORTANT INFORMATION:
I don’t want to just tell you where to get help. I want to help you know how to best ask for it. Programming is a technology but it’s also a cultural construct.

These aren’t robots you’re sending your questions out to on the internet. They’re communities of people and that means that you want to be respectful of them. Programming communities have their own cultural norms and expectations. You’ll get more help if you prepare for it.

You don’t have to worry too much about messaging me; I’m asking people to do that so I can see what ways I can improve these tutorials. I will say that I’m still early in my developer journey and I’m certainly not a machine learning dev, but I’ll help where I can. 🙂

When you post a question on Stack Exchange, it is important that you make your question concise and to the point. It needs to be a specific question. A good question would be “What hex wrench size do I need to change out the wheel on my Huffy bike?” A bad question would be “HELP!??? WHAT DO I DO? IS THIS BROKEN?” Here’s a great guide that goes into further detail.

You can post much more freeform questions to the GitHub page, but you’ll also want to provide as much information as possible. Be prepared to give a detailed account of what you’ve tried, how you’ve tried it, and so on. Posting screenshots of your terminal output can be really helpful.

I also think it’s generally a good idea to mention your background as an artist at the beginning of your request for help. This lets people know how familiar you are with the terms they might need to use, and it shows that you’re someone who wants to learn.

Programmers love programming. It excites them! They want to share that love with others. Programming isn’t easy though. Programmers know that, so they love it when someone is willing to try and learn something new. Don’t be ashamed! We all started at the same place, and we all get where we are because of the people who went before us.

Time to Install

Install Torch7

Open the terminal and type this in exactly:

cd ~/

cd stands for “change directory”. The ~/ stands for your home directory, which is the top-level folder for your user’s files in the computer’s file system.
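If you ever want to double check where you are, two handy commands are pwd (print working directory, which shows the folder you’re currently in) and ls (which lists the files inside it):

pwd
ls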

Next type:

curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash

The CLI will then use the curl command to grab the installation script. Curl is a program for grabbing a file from a URL; the name is short for “client URL”, and you say it like “curl”. Your computer probably won’t have curl on it by default. If it’s missing, the terminal will tell you and suggest the command to install it. Do that and let it do its thing.
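If you do need to install curl yourself, on Ubuntu that’s a single command (you’ll be asked for your password because of sudo):

sudo apt-get install curl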

If you had to install curl first, run the command above again so it actually downloads and runs the install script.

Once it’s done type in this:

git clone https://github.com/torch/distro.git ~/torch --recursive

You might also have to install git. Just do as you did with curl: install git, and when it’s done, re-run the command above. Git is a neat way for programmers to collaborate on projects together. You don’t really need to know anything about it to use this DeepStyle neural network, but if you’re curious I highly recommend this short interactive lesson on Codecademy. Similar to the CLI tutorial mentioned earlier, it’s really cool, and it teaches the concepts of git using natural language stuff like poetry and prose editing instead of code.
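If git is missing, installing it looks just like installing curl did:

sudo apt-get install git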

The git clone command will download the newest Torch distribution from the internet. When it’s done, type in this:

cd ~/torch; ./install.sh

That will change your directory to the torch directory and then run a Bash script the Torch developers wrote to install the files now that they are on your computer.
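One note: the install script should offer to add Torch to your PATH by editing your ~/.bashrc file. If the th command isn’t found in the later steps, refresh your shell configuration (or just open a new terminal window):

source ~/.bashrc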

Hopefully no problems so far.

Install loadcaffe

Okay, the first thing is installing the loadcaffe dependencies:

sudo apt-get install libprotobuf-dev protobuf-compiler

Now you just have to install loadcaffe:

luarocks install loadcaffe
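If you want a quick sanity check that loadcaffe went in cleanly, you can ask Torch to load it. If nothing complains, it worked:

th -e "require 'loadcaffe'"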

Presto! Now it’s time for DeepStyle.

Install DeepStyle

Okay, now it’s time to do the real deal!

First:

cd ~/

Second:

git clone https://github.com/jcjohnson/neural-style.git

Third:

cd neural-style

Fourth:

sh models/download_models.sh

This runs a Bash script that downloads the pretrained model the neural network will use.
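If you want to make sure the download worked, list the models folder. You should see the VGG_ILSVRC_19_layers files that show up in the test output below:

ls models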

Now Test!

th neural_style.lua -gpu -1 -print_iter 1

The -gpu -1 flag tells it to run on the CPU for now, and -print_iter 1 prints the loss at every iteration. If it works properly then you’ll see some output like this:

[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
WARNING: Skipping content loss
Iteration 1 / 1000
Content 1 loss: 2091178.593750
Style 1 loss: 30021.292114
Style 2 loss: 700349.560547
Style 3 loss: 153033.203125
Style 4 loss: 12404635.156250
Style 5 loss: 656.860304
Total loss: 15379874.666090
Iteration 2 / 1000
Content 1 loss: 2091177.343750
Style 1 loss: 30021.292114
Style 2 loss: 700349.560547
Style 3 loss: 153033.203125
Style 4 loss: 12404633.593750
Style 5 loss: 656.860304
Total loss: 15379871.853590

WOO!

But you aren’t done yet! Now you have to install CUDA. Like I said earlier, if you don’t install CUDA, then you can only use your CPU and that will take foreeeeeeeever.

Install CUDA

To install CUDA you’re going to need to download the right installer. If you followed the first tutorial then you probably installed Ubuntu 16. That means you need CUDA 8. You can find that here.

You’ll want to use these options when picking your installer:
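For reference, if you pick the “deb (local)” installer type, the install usually follows this general pattern. The exact .deb filename depends on the version and patch you download, so treat the one below as a placeholder rather than the literal command:

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda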

Download and run both of the installers, then reboot the machine. Open the terminal and run:

nvidia-smi

That checks that everything is installed correctly. The output should look something like this:

Sun Sep 6 14:02:59 2015
+------------------------------------------------------+
| NVIDIA-SMI 346.96 Driver Version: 346.96 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX TIT... Off | 0000:01:00.0 On | N/A |
| 22% 49C P8 18W / 250W | 1091MiB / 12287MiB | 3% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX TIT... Off | 0000:04:00.0 Off | N/A |
| 29% 44C P8 27W / 189W | 15MiB / 6143MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX TIT... Off | 0000:05:00.0 Off | N/A |
| 30% 45C P8 33W / 189W | 15MiB / 6143MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1277 G /usr/bin/X 631MiB |
| 0 2290 G compiz 256MiB |
| 0 2489 G ...s-passed-by-fd --v8-snapshot-passed-by-fd 174MiB |
+-----------------------------------------------------------------------------+

Now you’re going to install the CUDA backend for torch.

Run this command:

luarocks install cutorch

and then this:

luarocks install cunn

Run this to check and see if the installation has worked:

th -e "require 'cutorch'; require 'cunn'; print(cutorch)"

And then you’ll see an output sorta like this:

{
getStream : function: 0x40d40ce8
getDeviceCount : function: 0x40d413d8
setHeapTracking : function: 0x40d41a78
setRNGState : function: 0x40d41a00
getBlasHandle : function: 0x40d40ae0
reserveBlasHandles : function: 0x40d40980
setDefaultStream : function: 0x40d40f08
getMemoryUsage : function: 0x40d41480
getNumStreams : function: 0x40d40c48
manualSeed : function: 0x40d41960
synchronize : function: 0x40d40ee0
reserveStreams : function: 0x40d40bf8
getDevice : function: 0x40d415b8
seed : function: 0x40d414d0
deviceReset : function: 0x40d41608
streamWaitFor : function: 0x40d40a00
withDevice : function: 0x40d41630
initialSeed : function: 0x40d41938
CudaHostAllocator : torch.Allocator
test : function: 0x40ce5368
getState : function: 0x40d41a50
streamBarrier : function: 0x40d40b58
setStream : function: 0x40d40c98
streamBarrierMultiDevice : function: 0x40d41538
streamWaitForMultiDevice : function: 0x40d40b08
createCudaHostTensor : function: 0x40d41670
setBlasHandle : function: 0x40d40a90
streamSynchronize : function: 0x40d41590
seedAll : function: 0x40d414f8
setDevice : function: 0x40d414a8
getNumBlasHandles : function: 0x40d409d8
getDeviceProperties : function: 0x40d41430
getRNGState : function: 0x40d419d8
manualSeedAll : function: 0x40d419b0
_state : userdata: 0x022fe750
}

Finally you can use the GPU to run the neural network!

Check it using this command:

th neural_style.lua -gpu 0 -print_iter 1

If that doesn’t work then… well, something went wrong. Never fear! Remember the lessons above on asking for help. Reach out to people!
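As a small preview of Part 3, you can already point the script at your own images using the -content_image, -style_image, and -output_image flags. The example images below ship with the neural-style repository:

th neural_style.lua -content_image examples/inputs/tubingen.jpg -style_image examples/inputs/starry_night.jpg -output_image out.png -gpu 0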

Last week was how to install Ubuntu. Next week will be how to run the Bash scripts to use DeepStyle. The week after that I’ll go over the actual process I use for taking the imagery created by the program and painting new paintings in Photoshop.

I’ve also got a couple of spin off articles I want to write going over more specific aspects of the process, and some even further afield spinoffs on the meaning this technology has for art at large.

