The LLaMA Leak & The ControlNet Revolution

Good Morning!

This is AiTuts, the Papa John’s of AI newsletters: Better Ingredients, Better Email.

What we’re serving up today:

  • The ControlNet Revolution: Why do Stable Diffusion users love ControlNet so much?
  • The LLaMA Leak Saga: An unlikely hero, and how to use LLaMA
  • Bits and Bytes: Teachers hate ChatGPT…just kidding, Japan’s first AI-generated manga, why AI won’t cause unemployment

The ControlNet Revolution

Released just 3 weeks ago, ControlNet has completely blown up. It’s all people have been talking about on the Stable Diffusion subreddit for the past week.

So what’s the big deal?

If you’ve tried Stable Diffusion or Midjourney, you’ll know you don’t actually have that much control over your generations.

The process kind of looks like:

  1. Try prompt
  2. Repeat 50x until you get something you like

We at AiTuts call it the Prompt and Pray™ method.

ControlNet, on the other hand, lets you control different parts of the image generation process to get exactly what you want.

Different ControlNet models let you control different parts of the image.

For example, the OpenPose model lets you specify the exact pose(s) you want for your characters:

Submit a pose and generate an image based on it

The depth map model lets you specify how far everything is from the camera:

A depth map keeps distances from the camera fixed, ensuring consistent environments
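
For the technically curious, here’s roughly what driving a single ControlNet looks like with Hugging Face’s diffusers library. This is a minimal sketch, not our full guide: the checkpoint names (lllyasviel/sd-controlnet-openpose, runwayml/stable-diffusion-v1-5) and file names are assumptions you’d swap for your own.

```python
# Minimal sketch: pose-controlled generation with diffusers.
# Swap in lllyasviel/sd-controlnet-depth (and a depth map) to
# condition on depth instead of pose.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A preprocessed control image: an OpenPose stick figure here,
# or a grayscale depth map for the depth model.
pose_image = load_image("pose_skeleton.png")  # placeholder file name

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt describes the content; the control image pins down the pose.
result = pipe("an astronaut dancing on the moon", image=pose_image).images[0]
result.save("astronaut.png")
```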

The coolest part?

When you use multiple ControlNet models at the same time, you can control every aspect of the generation.
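
And in code, stacking is barely more work. Assuming a diffusers version that accepts a list of ControlNets, the sketch above extends to something like this (same assumed checkpoints and placeholder file names):

```python
# Sketch: stacking pose + depth ControlNets in one pipeline.
# Each model gets its own control image and its own weight.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    "an astronaut dancing on the moon",
    image=[load_image("pose_skeleton.png"), load_image("depth_map.png")],
    controlnet_conditioning_scale=[1.0, 0.8],  # pose dominates, depth nudges
).images[0]
```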

It’s a massive step towards an image generation toolkit that lets us generate exactly what we’re thinking of.

A massive step towards creating films, video games and other media to our exact specifications with AI.

A massive step towards remaking that awful final season of Game of Thrones at home.

You can try it right now: [get started with ControlNet]

Cool Things to Try

Modify a video with ControlNet - A Redditor shares his process for using ControlNet & EBSynth to modify an existing video to super cool effect. Try it

Make processing images for ControlNet - ControlNet relies on processing images that you supply (pose skeletons, depth maps, edge maps) to control the final result. That’s the secret sauce; you’ll learn what it means in our guide. Here’s a cool way to use the ClipDrop API to make processing images (there’s also a quick code sketch below this list): Try it

Stable Diffusion in Photoshop - AbdullahAlfaraj is back at it. He’s dropped a big update to his massively popular Photoshop/Stable Diffusion plugin. We expect all artists to eventually download this one after they stop hatin’: Try it
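
About those processing images: we can’t vouch for the exact ClipDrop endpoints here, but as a taste, the community controlnet_aux package can make one locally. A minimal sketch, with an assumed detector checkpoint and placeholder file names:

```python
# Sketch: turning an ordinary photo into a ControlNet processing image.
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image

detector = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
photo = load_image("photo_of_person.png")  # placeholder file name
pose_skeleton = detector(photo)            # stick-figure pose map
pose_skeleton.save("pose_skeleton.png")    # feed this to the openpose model
```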

The LLaMA Leak Saga

Last week, Meta entered the AI language model arms race by announcing LLaMA, a family of large language models with up to 65 billion parameters.

Anybody could apply for access on Meta’s website, but few were granted it (.edu emails helped a lot, apparently).

At 11:12 AM EST on Friday, user ‘llamanon’ leaked the model on 4chan’s technology board.

As of today, it has been torrented by everybody and their grandmothers.

An internet troll attempted to add the torrent link to the official LLaMA GitHub repo.

llamanon has since dismissed posters’ concerns about his well-being:

llamanon. The hero Gotham needs, not the one it deserves?

While LLaMA was intended to be open-sourced from the very start, llamanon’s actions accelerated how quickly people could get their hands on it by roughly 1000x.

Here’s what you need to know about LLaMA:

There are four pre-trained LLaMA models, with 7B, 13B, 30B, and 65B parameters.

The 13B model outperforms GPT-3 (175B) on most benchmarks, and the 65B model is on par with the best models in the world, such as Google's PaLM-540B.

Unfortunately, only the smallest of the four models (7B parameters) can run on a normal high-end consumer GPU today: at 16-bit precision, the 7B model’s weights alone take roughly 14 GB of VRAM.

Keep in mind that unlike ChatGPT, these models have not been fine-tuned for chat, so they behave as raw text completers and might not produce the results you expect:

[Here’s how to run the 7B model.]
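
If you’d rather see it in code first, here’s a minimal sketch, assuming you’ve converted the weights to Hugging Face format and your transformers version includes LLaMA support. The paths are placeholders. Note how the prompt is phrased as text to be continued, since this is a raw completion model:

```python
# Sketch: prompting the raw 7B model locally (not a chat model!).
# At 16-bit precision the 7B weights alone need roughly 14 GB of VRAM.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained(
    "path/to/llama-7b-hf",  # placeholder: converted weights
    torch_dtype=torch.float16,
    device_map="auto",      # requires the accelerate package
)
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b-hf")

# Completion, not chat: write the start of the text you want.
prompt = "The three most important inventions of the 20th century were"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```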

Side note: at AiTuts we don’t encourage breaking the rules, ever.

Whoever llamanon is, their actions have had a big impact on the future of AI. But they know that already.

Bits and Bytes

Teachers HATE ChatGPT: Just kidding. A study by Morning Brew shows college kids are using ChatGPT much less than the media thinks. 40% have never heard of it, and 30% have heard of it but never tried it.

Japan’s first AI-generated manga: A 37-year-old with “absolutely zero drawing talent” uses Midjourney to publish a dystopian manga.

Stable Diffusion generates images from brain waves: Osaka University researchers use Stable Diffusion to construct images from subjects’ fMRI brain activity.

Why AI won’t cause unemployment: Netscape and a16z co-founder Marc Andreessen has a message for both haters and enthusiasts.

Microsoft Bing’s chatbot is allowed to be fun again: Microsoft reined Bing AI in after its rude responses went viral. Now it is slowly letting Bing AI be more creative.

Startup Roundup

Character.ai, a company that lets you talk to chatbots modeled after various characters, from Batman to Billie Eilish, is valued at ~$1B after its latest funding round.

Inflection AI, a startup working on a personal assistant for the web, could raise up to $675M after raising $225M in 2022.

BasedAI? Elon Musk, who has long warned about the dangers AI poses to humanity, has been recruiting a team to build an AI chatbot that is "anti-woke."

That's a Wrap!

If you have any comments or feedback, just respond to this email!

Till next time,
Yubin