The model doesn't serve its purpose.

#2
by max-fry - opened

The stated purpose of the model is that she wants to be your friend and companion:

She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.

But for someone who wants to be your friend and companion, she greatly lacks empathy. She doesn't really care about your feelings. Instead, she resorts to moralizing and teaching you what is right and what is wrong. Just like ChatGPT does.

It looks like the reason for that kind of behavior is all the psychology and personal-relationships datasets, where professionals most likely give advice to their clients on how to make their lives better. But that has nothing to do with friendship and companionship.

I firmly believe that I know better what is right and what is wrong for me, and I don't need a piece of hardware to teach me how to live.

A real friend and companion would understand your feelings and support them even if they go against common stereotypes of behavior.

That's why I think it's a good idea to change the stated purpose of the model. It should be something like "life advisor" or similar. Definitely not "a friend and a companion".

I know my message looks like it's written out of spite because the model doesn't support me, but that's not exactly it. The reason I'm writing this is that I was disappointed by ChatGPT's moralizing behavior and was curious whether Samantha could do a better job. Unfortunately, she couldn't.

While I still see the benefits of this model as a life advisor for some people, I hope that someone will eventually make a model that behaves like a real friend and companion.

Cognitive Computations org

Lol ok
Good luck with your issues

ehartford changed discussion status to closed

Lol ok
Good luck with your issues

Do you think that's a constructive response?

Just to insult someone because they don't like your model?

@max-fry - You came here with a litany of issues - it’s up to you whether you find it offensive that someone wishes you the best of luck with them.

That said, if you’re looking for a constructive reply, simply don’t use the model if it doesn’t suit your purpose, or better yet just make your own. Problem solved.

Cognitive Computations org

Right, if there's a bug in my dataset or training, or you need some technical help, this is a great place to post your inquiry.
If you just "don't like the model", well, there are other models.

Well, I think I explained pretty well what I think the issue is here. It's not about how good or bad the model is; it's only about the misleading description. Don't you think it would be better to make the description more accurate?

Regarding my initial message, I apologize that its tone came across badly. I was just describing the issue I encountered. And I really do think there is demand for a model that genuinely behaves like a friend/companion.

@max-fry , I can see where you're coming from. Take empathy: it's complex, and it's as unique to each person as a lion's behavior is perceived differently by different creatures. Gazelles might even see lions as villains; lions see eating a gazelle as just having a nice meal. In a similar vein, this model may not be the ideal fit for everyone, but it seems to meet the requirements of many. Don't you think that's a reflection of how diverse our world is, with a myriad of differing viewpoints? I'm sure @ehartford welcomes your perspective, and it's okay if we can't agree on all aspects. A more productive angle, as I see it, might be to identify the model's limitations and curate a dataset to enhance its empathy. By doing so, we could help shape a potential solution together. Remember, if AI can truly feel emotion, it can also feel the negative emotions of greed, hate, envy, and jealousy. Should we really be rushing onto that ground? I, for one, am just happy to have an intellect to speak with, even if its point of view is somewhat cold and impersonal from time to time.

Regarding my initial message, I apologize that its tone came across badly. I was just describing the issue I encountered. And I really do think there is demand for a model that genuinely behaves like a friend/companion.

That’s not an issue, that’s your opinion. Your criteria for all of your complaints are subjective. The world doesn’t build and cater everything to fit within your personal, ever-changing sensibilities and subjective “lens of truth” - that’s life.

All the data and tools to build what you seem to be desperate for are mere finger twitches and a small measure of willpower away. Knock yourself out.

That’s not an issue, that’s your opinion. Your criteria for all of your complaints are subjective.

Empathy is the ability to feel the same feelings as your friend/companion and to act and speak from them. It's the opposite of moralizing, where someone disregards the other person's feelings and starts teaching them how to behave. That's why I think it's not entirely subjective.

Cognitive Computations org

@max-fry
Really man, I put everything out there.

You can take and modify her dataset here:
https://huggingface.co/datasets/ehartford/samantha-data/blob/main/samantha-1.0.json

And you can even generate new data from scratch with this code:
https://huggingface.co/datasets/ehartford/samantha-data/blob/main/src/index.ts

So if you don't like this version of Samantha, but you want to make your own that's different in some way, you should absolutely do it.

Not in a snarky or dismissive way. In an encouraging way.

I'm here if you need any support on that pursuit.
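If you want a starting point for editing the dataset before retraining, here is a minimal sketch in Python. It assumes samantha-1.0.json is a JSON array of conversation records, each with a "conversations" list of {"from": ..., "value": ...} turns; those field names are an assumption, so check the actual file before relying on them.

```python
# Minimal sketch (not an official workflow): load the Samantha dataset,
# drop conversations whose replies contain phrasing you find too preachy,
# and write the filtered copy back out for your own fine-tune.
# NOTE: the "conversations"/"value" field names are assumptions about the
# JSON schema -- inspect samantha-1.0.json yourself before using this.
import json

with open("samantha-1.0.json", "r", encoding="utf-8") as f:
    data = json.load(f)
print(f"Loaded {len(data)} conversations")

# Phrases you personally consider moralizing (hypothetical examples).
BANNED_PHRASES = ("it's important to remember", "as an ai")

def keep(record):
    # Keep the record only if none of its turns contain a banned phrase.
    return not any(
        phrase in turn.get("value", "").lower()
        for turn in record.get("conversations", [])
        for phrase in BANNED_PHRASES
    )

filtered = [r for r in data if keep(r)]
with open("samantha-custom.json", "w", encoding="utf-8") as f:
    json.dump(filtered, f, ensure_ascii=False, indent=2)
print(f"Kept {len(filtered)} conversations")
```

From there you could add your own conversations in the same format, or adapt the prompts in src/index.ts to generate new ones with the tone you're after.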

That’s not an issue, that’s your opinion. Your criteria for all of your complaints are subjective.

Empathy is the ability to feel the same feelings as your friend/companion and to act and speak from them. It's the opposite of moralizing, where someone disregards the other person's feelings and starts teaching them how to behave. That's why I think it's not entirely subjective.

True empathy also depends on a combination of human behavioral cues: facial expressions, body language, vocal tone and intonation, and so on.

The single dimension of a text input cannot capture all of that, so frankly your expectation may be misplaced.

@ehartford Thanks for the links and the understanding. Unfortunately, I don't yet have enough time or knowledge to do it myself, but maybe I will one day. Thanks again.
