How to use this demo?

#2
by Tudouni - opened

I entered my API key, but nothing happened.
Does anyone know what this is about?

After you paste your key, you need to press Enter to start Visual ChatGPT. Thanks!

Yes, I pressed Enter, but nothing happened.

I left it for some time and then it worked.
I think it may take a while to load the models here (at least it takes a long time on my local machine).

Where should I get the API key from?

You can get your OpenAI API keys here: https://platform.openai.com/account/api-keys
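For anyone stuck at the "nothing happened" step above, a minimal sketch of sanity-checking the key before pasting it into the demo. The `sk-` prefix check is an assumption about OpenAI's current key format, and `OPENAI_API_KEY` is just the conventional environment variable name, not something the demo requires:

```python
import os

def looks_like_openai_key(key: str) -> bool:
    """Rough sanity check only: OpenAI secret keys currently start with 'sk-'.

    This is a format assumption; the demo itself just takes the raw string.
    """
    return key.startswith("sk-") and len(key) > 20

# Conventional usage: export OPENAI_API_KEY=sk-... and read it back
key = os.environ.get("OPENAI_API_KEY", "")
if not looks_like_openai_key(key):
    print("Key looks malformed or missing; re-copy it from the OpenAI page.")
```

If the key passes a check like this and the demo still hangs, it is more likely the model-loading delay mentioned earlier in the thread.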

I have this running locally, but it seems very dumb.
[screenshot: ofc.png]

response: The image you provided is of a group of people having a good time.

How do I train this and make it smarter? I was expecting that if I asked "What is the text?", the reply would scrape the text from the image. Or that I could ask it to find the word "Tools" and return its rectangular coordinates.

Is this accessible by API?

Probably, but the default use is via the browser accessing a local server port.
I suspect I need to load more models/data; I just need to find which ones. I'm going to run it from Hugging Face first and see if the results improve. My time has been divided, so my focus has been limited, but I plan to turn my attention to it in a couple of weeks.

I meant, is it possible to access it through the Hugging Face Inference APIs?

Yes, I can try that too. I really don't know what the limitations are for this. I'm hoping I can ask what app elements are in the window, and it can read the text, find matching text, and return the UI elements' rectangular locations relative to the uploaded PNG's size.
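For what it's worth, the generic shape of a hosted Inference API call looks like the sketch below. Whether this particular Space is reachable that way is an open question; Spaces normally expose a Gradio endpoint instead, so treat the model id and token here as placeholders, not anything belonging to this demo:

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{model_id}"

def build_request(model_id: str, token: str, payload: dict) -> urllib.request.Request:
    """Assemble a POST request for the hosted Inference API.

    Note: Spaces (like this demo) usually serve a Gradio app rather than the
    Inference API, so this shows the general call shape, not a guaranteed route.
    """
    return urllib.request.Request(
        API_URL.format(model_id=model_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

# Hypothetical usage (placeholder model id and token):
req = build_request("some-user/some-model", "hf_xxx", {"inputs": "a group photo"})
# Sending it would be: urllib.request.urlopen(req)
```

The actual response format depends on the model's pipeline type, so check the model card before relying on any particular JSON shape.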

So far, I'm facing the same issue :(

Did you run it from Hugging Face or locally?

On Hugging Face.

Apparently there must be additional data/models that need to be referenced.
I'm hoping this weekend I can try to get it running as described in the docs.

Same here. I guess the backend of the application is saturated; maybe the weekend will be less busy.
