Generative Design with RunwayML
For the past few months, I have been reading about machine learning and building small projects to understand the space a bit more. One way to keep these experiments quick has been to use RunwayML for low-level design tasks like creating textures, gradients, and iconography, with varying levels of success. This post is a mini summary of what I have learned and what it may say about the future of generative design tools and workflows as of 2020. Each exploration needs to start with a dataset to feed to the computer. In this example, I will be training a StyleGAN model to make gradients.
There are many datasets available (see a list of my favorites below) for things like weather data, satellite images, etc., but generative design data has been harder to find. There aren't many repositories of pre-sorted design datasets yet, so the curation process is still fairly manual.
First, we need some sample gradients. I started with 25 images from Unicorn Gradients. Typically 500+ images would be better, but since the gradients have such a heavy blur it worked out. Next, upload them to your assets folder in RunwayML.
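Since hand-curating hundreds of gradients is tedious, one way to pad a small set out toward that 500+ mark is to generate synthetic ones programmatically. Here is a minimal sketch using NumPy and Pillow; it only makes crude two-color linear blends (real Unicorn Gradients are richer), and the `gradients/` output folder is just an example name:

```python
import os
import numpy as np
from PIL import Image

def random_gradient(size=512, rng=None):
    """Blend two random colors linearly along a random direction."""
    if rng is None:
        rng = np.random.default_rng()
    c0 = rng.integers(0, 256, 3).astype(float)  # start color
    c1 = rng.integers(0, 256, 3).astype(float)  # end color
    angle = rng.uniform(0, 2 * np.pi)
    ys, xs = np.mgrid[0:size, 0:size]
    # Project each pixel onto the gradient axis, normalized to [0, 1].
    t = xs * np.cos(angle) + ys * np.sin(angle)
    t = (t - t.min()) / (t.max() - t.min())
    pixels = c0 + t[..., None] * (c1 - c0)
    return Image.fromarray(pixels.astype(np.uint8))

# Pad a small hand-picked set out toward the ~500 images StyleGAN prefers.
os.makedirs("gradients", exist_ok=True)
for i in range(500):
    random_gradient().save(f"gradients/synthetic_{i:03d}.png")
```

Because StyleGAN only learns color and blur statistics here, even these simple blends can add useful variety to a tiny dataset.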
RunwayML already includes a lot of templates, so we need to train the model but don't need to create it from scratch. Choose the Image Generator template from the Models section and then select the files you uploaded earlier.
The process is still slow: the model took 3 hours for 3,000 steps, so I would start it before heading to bed.
When your model is complete, RunwayML can generate thousands of new images!
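Why can one trained model produce endless variations? Under the hood, StyleGAN maps random latent vectors to images, so every new random vector decodes to a new gradient, and walking between two vectors morphs one image into another. A rough conceptual sketch of that sampling is below; the 512-dimensional latent size matches StyleGAN's default, and none of this code talks to RunwayML itself:

```python
import numpy as np

LATENT_DIM = 512  # StyleGAN's default latent size

def sample_latents(n, rng=None):
    """Draw n random latent vectors; each one decodes to a distinct image."""
    if rng is None:
        rng = np.random.default_rng()
    return rng.standard_normal((n, LATENT_DIM))

def interpolate(z0, z1, steps=10):
    """Linear walk between two latents; decoded, this morphs one output into another."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.array([(1 - t) * z0 + t * z1 for t in ts])

zs = sample_latents(1000)         # seeds for a thousand new gradients
path = interpolate(zs[0], zs[1])  # a smooth 10-frame morph between two of them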
That was a quick overview, but it shows how the tooling is becoming more non-developer friendly. In 1-2 years, more plugins and local-first tools should make experimenting, and even mainstream use, easier, especially for the inspiration or synthesis of screen design. Some other, less successful experiments of mine have been:
Emoji Generator - Trained on Twitter's open-source emoji set. The thinking was that it could create something like the Emoji Mashup Bot. The results looked more like something from Watchmen.
Townscaper Generator - A popular game called Townscaper uses wave function collapse to procedurally generate buildings on a grid. I downloaded all the images of the game I could find on Twitter, cleaned them up, and trained a StyleGAN model.
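For scraped screenshots like these, the cleanup step mostly means cropping everything to uniform squares, since StyleGAN expects same-sized square inputs. A sketch of that step with Pillow; the folder names are hypothetical:

```python
from pathlib import Path
from PIL import Image

def to_training_square(path, size=512):
    """Center-crop an image to a square, then resize it for StyleGAN."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size), Image.LANCZOS)

# Hypothetical folders: raw Twitter screenshots in, cleaned dataset out.
out_dir = Path("townscaper_clean")
out_dir.mkdir(exist_ok=True)
for p in Path("townscaper_raw").glob("*.png"):
    to_training_square(p).save(out_dir / p.name)
```

A center crop throws away the edges of wide screenshots, which is usually fine for game captures where the subject sits mid-frame.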
Resources for ML: #
- Using Artificial Intelligence to Augment Human Intelligence
- BEING THE MACHINE | Making Home
- curriculum/Machine Listening
- Terrain Generation With Deep Learning | Two Minute Papers #208
Public Datasets: #
Runway Models: #