
Becoming an OpenSea NFT artist through training StyleGAN2 using Z by HP equipment

Spring 2021

Vision AI Research Development

As a Z by HP Data Science Global Ambassador, Chanran Kim's content is sponsored and he has been provided with HP products.

1. Introduction

Contemporary artists are using AI in their work to discover new forms of creativity and originality. NVIDIA showcases such artists and their works through The AI Art Gallery.

 

GTC Apr 2021: Digital AI Art Gallery

Artists and musicians are tapping into AI to uncover unexpected creativity and originality in their work.

 

In particular, NFT sales of artworks have become a hot topic recently. Wouldn’t it be a great experience to create works with AI and then sell them as NFTs? In this post, I’m going to cover both: the process of creating art with StyleGAN2, and posting and selling the results on OpenSea’s NFT art marketplace.

 

Training a model that produces high-resolution images (e.g., 1024×1024) takes powerful hardware, and the HP Z4 proves its performance here.

2. Brief description of NFT and OpenSea

NFT stands for ‘non-fungible token’: a token that cannot be substituted one-for-one with another.

 

Fungible tokens of the same kind all have the same value and function: they can be exchanged with one another, and a 1:1 exchange of the same unit makes effectively no difference. Examples include fiat currencies, common cryptocurrencies such as Bitcoin and Ether, precious metals, and bonds.

 

Non-fungible tokens, on the other hand, each have their own uniqueness. They are similar to an airline ticket: because the issuer, flight, and seat are all specified, no two identical ones can exist. An NFT’s uniqueness is guaranteed by permanently recording encrypted transaction details on a blockchain. This differs from the traditional approach, in which uniqueness is certified by a specific individual or institution. The technology has attracted attention because it can issue “unique ownership” even for digital files that anyone can copy.
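The distinction between a freely copyable file and a unique token can be illustrated with a toy hashing sketch. This is only an analogy in plain Python, not actual ERC-721 or blockchain logic; the record format is entirely hypothetical.

```python
import hashlib

def content_hash(data):
    # Byte-identical copies of a file always share the same hash,
    # so hashing the file alone cannot make it unique.
    return hashlib.sha256(data).hexdigest()

def token_record(content, minter, tx_index):
    # A hypothetical token record: it binds the content hash to a specific
    # minting event, so each token stays unique even though the file itself
    # remains copyable.
    payload = (content_hash(content) + minter + str(tx_index)).encode()
    return hashlib.sha256(payload).hexdigest()

artwork = b"generated-image-bytes"
copy_of_artwork = b"generated-image-bytes"

# The file is freely copyable: identical bytes, identical hash.
assert content_hash(artwork) == content_hash(copy_of_artwork)

# But two minting events over the same bytes yield distinct token records.
assert token_record(artwork, "alice", 1) != token_record(artwork, "alice", 2)
```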

 

At Christie’s auction on March 11 (local time), ‘Everydays: the First 5000 Days’, a JPG work by Mike Winkelmann, known as ‘Beeple’, sold for $69.3 million (about KRW 78.5 billion).

 

Twitter co-founder Jack Dorsey’s first tweet sold at auction for $2.5 million (about KRW 2.7 billion).

 

MIT Technology Review (Article in Korean only)

 

OpenSea is the first decentralized, peer-to-peer marketplace for blockchain-based assets, including crypto collectibles, gaming items, and other assets backed by a blockchain. The OpenSea team has backgrounds from Stanford, Palantir, and Google, and is funded by Y Combinator, Founders Fund, Coinbase Ventures, 1Confirmation, and Blockchain Capital. OpenSea is currently the largest general marketplace for user-owned digital items, with the broadest set of categories (90 and growing), the most items (over 1 million), and the best prices for new categories of items. Since launching its beta in December 2017, the site has had over 18,000 ETH pass through its market.

3. StyleGAN2 

A generative AI model learns to create images from a latent vector. Style-based approaches can transform an existing image into a new style, or manipulate an image by adjusting its latent vector. StyleGAN2 is among the latest of these techniques and performs quite well.
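To make the latent-vector idea concrete, here is a minimal, framework-free sketch in plain Python (not actual StyleGAN2 code) of sampling two latent vectors and interpolating between them; feeding the intermediate vectors to a generator produces a smooth morph between two images. The 512-dimensional size matches StyleGAN2's usual z dimension.

```python
import random

Z_DIM = 512  # StyleGAN2's usual latent dimensionality

def sample_latent(seed, dim=Z_DIM):
    # Draw a latent vector z from a standard normal distribution,
    # reproducibly for a given seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def lerp(z0, z1, t):
    # Linearly interpolate between two latents; passing the intermediate
    # vectors through the generator morphs one image into the other.
    return [(1.0 - t) * a + t * b for a, b in zip(z0, z1)]

z_a = sample_latent(0)
z_b = sample_latent(1)
midpoint = lerp(z_a, z_b, 0.5)
assert len(midpoint) == Z_DIM
assert lerp(z_a, z_b, 0.0) == z_a  # t=0 returns the first latent unchanged
```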

Abstract: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent vectors to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.

Requirements (from the original StyleGAN2 repository)

● Both Linux and Windows are supported. Linux is recommended for performance and compatibility reasons.

● 64-bit Python 3.6 installation. We recommend Anaconda3 with NumPy 1.14.3 or newer.

● We recommend TensorFlow 1.14, which we used for all experiments in the paper, but TensorFlow 1.15 is also supported on Linux. TensorFlow 2.x is not supported.

● On Windows you need to use TensorFlow 1.14, as the standard 1.15 installation does not include necessary C++ headers. 

● One or more high-end NVIDIA GPUs, NVIDIA drivers, CUDA 10.0 toolkit and cuDNN 7.5. To reproduce the results reported in the paper, you need an NVIDIA GPU with at least 16 GB of DRAM.

● Docker users: use the provided Dockerfile to build an image with the required library dependencies.
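As a quick sanity check before installing, the Python-version requirement above can be verified programmatically. This is only an illustrative stdlib snippet; CUDA/cuDNN and GPU checks would need vendor tooling such as nvidia-smi and are omitted here.

```python
import sys

MIN_PY = (3, 6)  # the repository's stated minimum (64-bit Python 3.6)

def python_ok(version=sys.version_info, minimum=MIN_PY):
    # Compare only (major, minor) against the minimum; patch level and
    # build details do not matter for this requirement.
    return tuple(version[:2]) >= minimum

assert python_ok((3, 7))
assert not python_ok((2, 7))
```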

 

Here I use StyleGAN2-ADA, a slightly more advanced variant of StyleGAN2. Because it applies adaptive data augmentation, it trains effectively even on relatively small datasets. Download the code with the following git clone command.

 

git clone https://github.com/NVlabs/stylegan2-ada-pytorch.git

 

Assume that the folder containing the images to train on is called ‘images’. Collecting at least a few thousand images is good for training.
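Before converting the folder into a dataset, it can help to count how many usable images it actually contains. Below is a small stdlib sketch; the accepted extensions here are an assumption, so check dataset_tool.py for the authoritative list. The demo uses a throwaway temporary folder standing in for ./images.

```python
import tempfile
from collections import Counter
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg"}  # assumed; see dataset_tool.py

def count_images(folder):
    # Tally candidate training images by extension, ignoring everything else,
    # so you can confirm the dataset is large enough before converting it.
    counts = Counter(
        p.suffix.lower()
        for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in IMAGE_EXTS
    )
    return sum(counts.values()), counts

# Demo on a temporary folder standing in for ./images.
with tempfile.TemporaryDirectory() as d:
    for name in ["a.png", "b.jpg", "c.jpeg", "notes.txt"]:
        (Path(d) / name).touch()
    total, by_ext = count_images(d)

assert total == 3            # notes.txt is ignored
assert by_ext[".jpg"] == 1
```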

 

We create a dataset at 256×256 resolution, which is known to train reliably. Resolutions up to 1024 are possible, but outside of aligned face datasets you should not expect overwhelming results. Since various tasks such as parameter tuning may also be required, it is advisable to try higher resolutions only after understanding StyleGAN2.

 

python dataset_tool.py --source=./images --dest=./data/stylegan2-dataset.zip --width=256 --height=256

 

The command to train on the created dataset is as follows.

 

python train.py --outdir=runs --data=./data/stylegan2-dataset.zip --gpus=1 --aug=ada --target=0.7

With the HP Z4 workstation’s two GPUs, a larger batch size than the default can be used. This enables faster and more stable training.

 

Initially, the model produces unrecognizable images, since it starts from its initial state. Some people may find these beautiful in their own right.

After about 1,400 kimg of training, a distinct style begins to appear, as shown below. It even draws a frame. Some results are still incomplete, so additional training may be required. These images are generated directly from latent vectors, and the style can also be varied through any input image.

Using the trained model, you can generate images with the following command. A seed is a random-number starting value; varying it produces varied outputs.
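Under the hood, generate.py turns each --seeds entry into a latent vector deterministically (the repository uses np.random.RandomState(seed).randn(1, G.z_dim)). The following stdlib stand-in, not the repository's actual code, illustrates why the same seed always reproduces the same image.

```python
import random

def latent_from_seed(seed, dim=512):
    # Each seed deterministically produces one latent vector:
    # same seed -> same latent -> same generated image.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

seeds = [85, 265, 297, 849]  # the seeds used in the command above
latents = [latent_from_seed(s) for s in seeds]

assert latent_from_seed(85) == latents[0]   # reproducible
assert latents[0] != latents[1]             # different seed, different image
```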

 

python generate.py --outdir=out --trunc=1 --seeds=85,265,297,849 --network=./runs/00000-stylegan2-dataset-auto2/network-snapshot-001400.pkl

 

The ‘00000’ in ‘00000-stylegan2-dataset-auto2’ increments with each training run, and the ‘001400’ in ‘network-snapshot-001400.pkl’ grows with the amount of training completed. Note that if you do not enter the file path that matches your own run, an error will occur.
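If you want to script over checkpoints, the training-progress counter can be parsed out of the snapshot filename. A small sketch, assuming the filename pattern shown in the example above:

```python
import re

def snapshot_kimg(path):
    # Extract the progress counter from a checkpoint name such as
    # 'network-snapshot-001400.pkl'; the number grows with the amount
    # of training completed, so later snapshots carry larger values.
    m = re.search(r"network-snapshot-(\d+)\.pkl$", path)
    if m is None:
        raise ValueError("not a snapshot file: " + path)
    return int(m.group(1))

assert snapshot_kimg(
    "runs/00000-stylegan2-dataset-auto2/network-snapshot-001400.pkl"
) == 1400
```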

4. OpenSea

MetaMask is required to sign up for and log in to OpenSea; all transactions go through this Ethereum wallet. If you don’t have one, you’ll need to create it.

 

 

Then go to OpenSea and press the Create button, which will prompt you to log in with MetaMask.

 

 

Go to Create > My Collections to create a collection, add a new item to it, and upload the images generated above; the artwork created by the AI you trained will then be released on OpenSea. Of course, rather than using the raw output as-is, it is better to add finishing touches that express your own identity.

5. Conclusion

The experiment above showed that generating a large number of images requires a lot of memory. Powerful equipment is needed for fast and stable training, and the Z4 is very effective for this.

 

You can check out my OpenSea collection in detail here.

I hope you will also start creating artwork using AI and will be interested in entering the art market using NFT through OpenSea.
