KingNish posted an update May 15
Something is wrong with GPT-4o

Today, I gained access to GPT-4o, so I thought I'd test it. However, I encountered several problems. When I requested image generation, it did not create any images but only provided links, which were also incorrect. 😥 [Image 1]

Subsequently, I considered that my prompt might be incorrect, so I attempted once more with a prompt from OpenAI's examples, but it also did not work. 😥 [Image 2]

Then, I tested its logical reasoning skills, which it failed. I presented a question that an 8B model solved with ease, but GPT-4o could not. 😥 [Image 3]

I also attempted to generate an image from another image, but this too was unsuccessful. [Image 4]

Nonetheless, it excels in tasks such as image classification and voice chat.

If you've experienced similar issues, please share them here.

It's GPT-4o, not GPT-4.

·

Thank you for correcting me.

Hi,
I think image generation is only available to Plus subscribers. I'm on the Free plan, so I'm experiencing similar issues. It will generate links unless you're a subscriber.

·

OK, thanks.

Hello KingNish, as mrfakename said, I confirm that you need a subscriber account to create images with GPT-4o.

As their blog states, as of right now GPT-4o only accepts text and image input and produces text output; that's all, as they explained in their blog post. The image generation on Plus is actually DALL·E, from what I know, and if you check their API, the GPT-4o endpoint only allows text and image input, and text output.
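
For reference, here's a minimal sketch of what that endpoint accepts, assuming the v1 OpenAI Python SDK; the image URL is just a placeholder. You can mix text and image parts in the input, but the only output modality you get back is text:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# GPT-4o chat endpoint: the input message may mix text and image parts,
# but the response is text only -- there is no image output to request.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # placeholder URL, not a real image
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```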

Voice is still the old pipeline; they are still red-teaming the new one.

It constantly gets stuck in a copying loop where it just repeats what I say back to me, but with better grammar.

To those who said that certain features require a subscription: I'd like to point out that, according to the official presentation (which included a very impressive, even hype-inducing demo showing all of the features in the small form of a mobile app), the "o" in the model's name stands for "omni". That is a hint that the model is a single multimodal model capable of handling it all at once, much faster than the standard GPT-4, which is supposedly why they are able to deliver that experience to free users. That was the official statement; go ahead and watch the presentation of the model to hear it from the OpenAI team itself.

I'm not saying that this is what it actually is, only that this is the way they originally presented it. If they failed to deliver on that promise, that's a whole different matter worth its own review and analysis.