Battle of the AI models: Stable Diffusion 2.0 vs Dall-E 2

In my last article, I spoke about the future of generative AI and mentioned that I would save the ethical aspects for another day. Today is that day.

Stable Diffusion 2.0 (released November 24th, 2022) and Dall-E 2 (released September 28th, 2022) are two of the most cutting-edge AI models in the field right now, and both are creating a lot of excitement. The two models, however, come from two different companies: Stability AI and OpenAI, respectively.

When two competing companies release similar technologies, there is bound to be a difference between them, and in this case, it is the approach. OpenAI was founded on December 11th, 2015 by Elon Musk and Sam Altman. Its motto was to democratise AI and make AI-based technologies open source for developers around the globe to contribute to. 7 years later, OpenAI now SELLS its tech to enterprises. Microsoft recently announced a new product, Microsoft Designer, with Dall-E 2 baked into it.
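
To illustrate the closed approach: Dall-E 2 is only reachable through OpenAI's paid API, never as downloadable code or weights. Here's a minimal sketch using the openai Python package (the key and prompt are placeholders; you'd need your own paid API key):

```python
import openai

# Authenticate with a paid key. The model itself never leaves OpenAI's servers.
openai.api_key = "sk-..."  # placeholder; use your own key

# Ask Dall-E 2 for a single 512x512 image.
response = openai.Image.create(
    prompt="a watercolour painting of a robot reading a book",
    n=1,
    size="512x512",
)

# The API returns a URL to the generated image, not the model or its weights.
print(response["data"][0]["url"])
```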

5 years after OpenAI came Stability AI, with the same motto. Now, it is doing what OpenAI did, with a twist: the code for its most recent model, Stable Diffusion 2.0, is open source, which means you or I can create our own versions of the technology. You can probably see where this is going already.
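
By contrast, Stable Diffusion 2.0's code and weights are publicly released, so anyone can pull them down and run the model on their own hardware. A minimal sketch using Hugging Face's diffusers library (assuming a machine with a CUDA GPU; the prompt is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the openly published Stable Diffusion 2.0 weights from Hugging Face.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # run locally on your own GPU

# Generate an image entirely on your own machine: no API, no gatekeeper.
image = pipe("a watercolour painting of a robot reading a book").images[0]
image.save("robot.png")
```

Because the weights sit on your own disk, you are free to fine-tune or modify them however you like, and that freedom is exactly what makes the open approach both powerful and risky.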

So why did OpenAI decide to stay closed source while Stability AI is encouraging, even pushing, developers to tinker with its code? There is no clear answer as to which approach is right, but one thing is clear: Stability AI's decision will come with complications. Here's why:

Open-sourced generative AI is an exceptional tool for developers to learn from and build on. It is amazing to think that not just a tech enthusiast like me can explore the potential of generative AI, but also a middle schooler who is just getting started with programming and getting their hands on technology.

Does this mean free code translates to faster technological development? Not always. Open-sourced AI can have some dark consequences, sprouting from the ill intentions of a few. Let me give you an example:

Someone could use generative AI to create deepfakes of another person, whether to threaten them or for personal gain. This is clearly not favourable, and nobody should be enabled to do such a thing. At first glance, it almost feels like OpenAI is doing wrong by not letting us change its code and make our own versions of it, but on second thought, it's clear that this tech in the wrong hands can be utterly misused.

This shouldn't be a matter of who's right or wrong, but of what should be done about the rising cases of technological misuse. If you ask me, we should create regulations and policies for technologies that are vulnerable to misuse.

For instance, I use Google's Colab to create my AI models and programs. However, every time I try to run an open-sourced generative AI model, Colab throws an alert warning me that doing so could get my account terminated. I can't let that happen: Colab gives me free GPUs to train my models on, and I'd feel homeless without it. So I stop running that model on Colab and create my own 'lite' version to learn from. Similar approaches should be adopted across AI.

To conclude, I would like to say that it isn't appropriate to take sides between these companies. Instead, respect the technology and understand that both are making tremendous, valuable progress in the field of AI. With that said, we must push to create regulations and policies around generative AI technologies that are known to be vulnerable to misuse.
