Art for the game, generated by a neural net

Hi PrUn devs and those familiar with NNs,

I happen to be a neural network developer, and I also happen to work with computer vision in the art sector. I am currently exploring GANs and text-to-image neural nets professionally. I thought that this game could use some art, massive amounts of art. Generating all of that via a conventional pipeline is currently out of reach for the developers.

So my suggestion would be to band together with one or two other NN developers, assemble a few nets from the free and paid ones available on the market, and generate art for the game in a specific style, with no hallucinations. The devs can later select the pieces they like and hopefully implement them in the game.
For this we will look for networks whose license allows generated content to be used commercially. We could potentially run them on AWS or Azure compute. More on this later.

Spaceships, planet surfaces, solar systems, star constellations, space stations, etc. would be the topics for the images.

You don't have to be super experienced with NNs or Python, but some prior experience with either text recognition or image recognition would help. The ability to chip in a few bucks for this would help too.


Wow! That would be great!

So here are the services that need to be investigated for licensing, performance, and pricing (where paid). I cannot put more than two URLS in a post, so… multiple posts.

Basic level: free and semi-free

  • NightCafe (text to image; can generate images via VQGAN+CLIP and CLIP-guided diffusion; you can add your own image and generate a style on top of the generated image)
  • Dream (text to image; you can add your own image and generate a style on top of the generated image)
  • A website containing a huge database of AI generative art tools and Colabs (text to image, video to image, pixel art, text generators, and lots of other stuff)

Some interesting Colabs to consider:

  • Disco Diffusion (Google Colab) (text to image, text to animation, generation on top of your input image)
  • Centipede Diffusion (Google Colab) (text to image, inpainting, variations)
  • Modified rudalle (Google Colab) (text to image, allows a variety of styles)
  • Dalle Flow (Google Colab) (text to image, variations; a larger model than Dalle mini)
  • Pixel Art Diffusion (Google Colab) (text to pixel art, text to pixel art animation)
  • Disco Diffusion Warp (Google Colab) (text to animation, generation on top of your video)

Finally, there are paid services or ones with an invitation link:

So I settled on Disco Diffusion for now. Here are the few that were semi-successful, IMHO, on the topic of a desert planet, with the outdated default settings and very limited knowledge on my side.

That's it for the first batch of 50; I selected the most adequate ones.
I will now look into the settings to generate finer details and fewer artifacts.

Inputs of the previous batch:
Seed used: 503079803
Rest is default

Started tweaking the settings after reading the documentation and talking to other users of DD.
Here are some results.
I like how I managed to achieve finer details; however, I believe I lost some of the abstraction that was nicer in the previous images. The prompt was tweaked somewhat too. I had to reduce the image resolution for now to accelerate generation while still exploring the settings. My goal is an art style somewhere between Disco Elysium and Guild Wars 2: abstraction, yet with some detail. Once that is done I will start generating other motifs. For now, Desert Planet is the baseline.

Settings of this run:
I forgot to save the seed number >.< Well, consider these to be NFTs now, lol.

    "text_prompts": {
        "0": [
            "A beautiful painting of sci-fi starship, standing in the desert landscape, by greg rutkowski and thomas kinkade, Trending on artstation.",
            "brown color scheme"
        ],
        "100": [
            "Dust storm is approaching in the horizon",
            "Heavy winds"
        ]
    },
    "image_prompts": {},
    "clip_guidance_scale": 10000,
    "tv_scale": 0,
    "range_scale": 150,
    "sat_scale": 0,
    "cutn_batches": 8,
    "max_frames": 10000,
    "interp_spline": "Linear",
    "init_image": null,
    "init_scale": 1000,
    "skip_steps": 10,
    "frames_scale": 1500,
    "frames_skip_steps": "60%",
    "perlin_init": false,
    "perlin_mode": "mixed",
    "skip_augs": false,
    "randomize_class": true,
    "clip_denoised": false,
    "clamp_grad": true,
    "clamp_max": 0.05,
    "seed": 1031775764,
    "fuzzy_prompt": false,
    "rand_mag": 0.05,
    "eta": 0.8,
    "width": 768,
    "height": 512,
    "diffusion_model": "512x512_diffusion_uncond_finetune_008100",
    "use_secondary_model": true,
    "steps": 500,
    "diffusion_steps": 1000,
    "diffusion_sampling_mode": "ddim",
    "ViTB32": true,
    "ViTB16": true,
    "ViTL14": false,
    "ViTL14_336px": false,
    "RN101": false,
    "RN50": true,
    "RN50x4": false,
    "RN50x16": false,
    "RN50x64": false,
    "cut_overview": "[12]*400+[4]*600",
    "cut_innercut": "[4]*400+[12]*600",
    "cut_ic_pow": 1,
    "cut_icgray_p": "[0.2]*400+[0]*600",
    "key_frames": true,
    "angle": "0:(0)",
    "zoom": "0: (1), 10: (1.05)",
    "translation_x": "0: (0)",
    "translation_y": "0: (0)",
    "translation_z": "0: (10.0)",
    "rotation_3d_x": "0: (0)",
    "rotation_3d_y": "0: (0)",
    "rotation_3d_z": "0: (0)",
    "midas_depth_model": "dpt_large",
    "midas_weight": 0.3,
    "near_plane": 200,
    "far_plane": 10000,
    "fov": 40,
    "padding_mode": "border",
    "sampling_mode": "bicubic",
    "video_init_path": "/content/drive/MyDrive/init.mp4",
    "extract_nth_frame": 2,
    "video_init_seed_continuity": false,
    "turbo_mode": false,
    "turbo_steps": "3",
    "turbo_preroll": 10,
    "use_horizontal_symmetry": false,
    "use_vertical_symmetry": false,
    "transformation_percent": [],
    "video_init_steps": 100,
    "video_init_clip_guidance_scale": 1000,
    "video_init_tv_scale": 0.1,
    "video_init_range_scale": 150,
    "video_init_sat_scale": 300,
    "video_init_cutn_batches": 4,
    "video_init_skip_steps": 50,
    "video_init_frames_scale": 15000,
    "video_init_frames_skip_steps": "70%",
    "video_init_flow_warp": true,
    "video_init_flow_blend": 0.999,
    "video_init_check_consistency": false,
    "video_init_blend_mode": "optical flow"
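Since the seed for this run was lost, a small helper could dump each run's settings (seed included) to a timestamped file before generation starts. This is only a sketch; the `save_run_settings` helper and the file-naming scheme are my own convention, not part of the Disco Diffusion notebook.

```python
# Hedged sketch: persist a run's settings dict (including the seed) to a
# timestamped JSON file so no batch's parameters are lost again.
import json
import os
import time

def save_run_settings(settings: dict, out_dir: str = ".") -> str:
    """Write settings to <out_dir>/dd_settings_<unix-time>_seed<seed>.json."""
    seed = settings.get("seed", "unknown")
    path = os.path.join(out_dir, f"dd_settings_{int(time.time())}_seed{seed}.json")
    with open(path, "w") as f:
        json.dump(settings, f, indent=4)
    return path
```

Calling it once at the top of the notebook, right after the settings cell, would be enough to make any batch reproducible later.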

These look great, and I think it could be a good marketing opportunity if PrUn is the first commercially produced game which uses AI procedurally generated art - there could be interest from the press and lots of free publicity, so I certainly hope that the devs consider this.

However, the main problem as I see it is that each image still needs manual review to pick out the best one from the generated candidate images, so it seems that this is not a “fire and forget” type process where we could easily generate a planet surface image and a solar system view for every location in PrUn. There are thousands of planets in the game, which makes manually allocating each image infeasible, so we would need to come up with a way around that.

Also, all the images so far are kind of similar. I guess that’s because they all come from the same text input, but this makes me wonder how much variation there would be for planet images generated from current game data. At the moment the only adjectives we have for the planets are “Rocky” or “Gaseous”. I guess you could try adding in “Low pressure” and “Iron ore” and see what the AI spits out. What would be interesting is if it interprets the names too. What does a neural net think of when it sees the word “Promitor”?
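As a sketch of how prompts could be built from game data, something like the following could turn a planet's name, type, and attribute tags into a text prompt. The data layout and the template wording are my assumptions; only "Rocky" and "Gaseous" are confirmed in-game adjectives.

```python
# Hedged sketch: build a text-to-image prompt from assumed PrUn planet data.
# The attribute list and template wording are illustrative assumptions.
def planet_prompt(name: str, ptype: str, attributes: list[str]) -> str:
    traits = ", ".join(a.lower() for a in attributes)
    return (
        f"A beautiful painting of {name}, a {ptype.lower()} planet, "
        f"{traits}, seen from orbit."
    )

print(planet_prompt("Promitor", "Rocky", ["Low pressure", "Iron ore"]))
# -> A beautiful painting of Promitor, a rocky planet, low pressure, iron ore, seen from orbit.
```

Because the planet name is passed through verbatim, this would also answer the "Promitor" question empirically: just run the generated prompt and see what the net does with the word.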

Yes, manual review is needed, because the algorithms are far from perfect. However, selecting three out of ten once a day is not an issue. Covering a thousand is actually not hard, as long as you split it into topics such as “Rocky Planets”, “Space Stations”, “Cargo deck”, “Factories”, “Unloading Spaceship”, etc. If someone can sit down and do such a breakdown, it will help the project.
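The topic breakdown described above could feed straight into batch generation: expand each topic into a queued prompt so images can be generated and reviewed per topic. A minimal sketch, where the topic names come from this post but the style suffix and function are my own:

```python
# Hedged sketch: expand each review topic into a text-to-image prompt so
# batches can be generated and reviewed topic by topic.
TOPICS = [
    "Rocky Planets",
    "Space Stations",
    "Cargo deck",
    "Factories",
    "Unloading Spaceship",
]

STYLE = "by greg rutkowski and thomas kinkade, trending on artstation"

def topic_prompts(topics: list[str]) -> list[str]:
    return [f"A beautiful painting of {t.lower()}, {STYLE}." for t in topics]

queue = topic_prompts(TOPICS)  # one prompt per topic, ready to batch
```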

The images are all similar because the prompt is the same. I tweak it a bit, but I still need the baseline to understand which settings to set in stone and which still require tweaking.

Regarding the lack of adjectives in the game: that is a problem of the game itself, so we can only come up with our own and hope the devs will accept some of them into the galaxy.

Unknown words will most likely end up as hallucinations. I don't really want to try them yet; I need a baseline first.