Discussion about dataset removal

#12
by tobiasdrundridge - opened

In light of the change removing the dataset, I wanted to comment that I think it is the correct decision. While ATProto is open to all, and while anyone could and still can obtain/distribute similar data, there is a difference between something being possible and something being the right thing to do.

My objection isn't related to legality or copyright. It simply leaves a bad taste in my mouth to be treated like a resource.

agreed; this data collection should have been opt-in from the start, and it's very disappointing to see that approach wasn't even considered.

there is already an incredibly negative perception around AI training, and efforts like this only serve to deepen the wound.

Bluesky Community org

It's quite late/early for me here, so I plan to go back to bed soon! I just wanted to say that I also think I went about this in the wrong way and this could have been done better from the start. I think the approach suggested here of letting people decide could be very valuable and would allow people to create datasets that can help improve Bluesky without requiring everyone to sign up for that.

I'm going to be writing more about this tomorrow, but there is anthropological value in such an endeavor as yours; however, there's a reason that US census data is redacted for 72 years before the full raw data (including home mailing addresses and personally identifiable information) is made public. Doing it as it has been done here ups the ick factor into the stratosphere.

I hope the people at Hugging Face read every single negative post from Bluesky users about the dataset and use it as a learning experience. AI companies and AI developers might be all excited about AI, but many people outside the tech world are pissed about AI companies taking their content without their consent.

the unfortunate circumstance is that too many tech companies have gotten far too comfortable with their ability to vacuum massive swaths of data from unsuspecting users without any repercussions. legislation has not caught up (and will likely be unable to for some time, due to the unfamiliar territory) and any action taken is currently reactive instead of proactive. social push-back also does little-to-nothing to stop corporations, so long as the money keeps coming in.

it is good to see that community feedback is at least being listened to here, but that cannot be relied upon in most cases. The Powers That Be would need to crack down hard on AI training and usage, whatever form that may take.

"discussion of how datasets can be used to help improve Bluesky and allow people to build the tools they need to build their own open models and approaches to creating feeds that work for their needs."

I am shocked at the tone used here - this is so condescending. Did you get approval from each individual user in the dataset that you, or anybody else, can use it for such purposes? What if one of the users does not want their posts to be used to improve Bluesky?

Hugging Face does not do the same with Disney content or New York Times content. Why do you think you can do it with data on Bluesky?

At the end of the day, any publicly accessible website is being archived and used in datasets (and also for targeted ads, to "improve" said website, for AI, and anything in between) no matter if you like it or not. The difference here is that it would have helped open source research instead of opaque closed-source ones.

At the end of the day, any publicly accessible website is being archived and used in datasets (and also for targeted ads, to "improve" said website, for AI, and anything in between) no matter if you like it or not. The difference here is that it would have helped open source research instead of opaque closed-source ones.

oh well as long as everyone's doing it that's okay

@SerialKicked Those websites have terms and conditions covering the use of user data within them. Did Bluesky users accept terms and conditions that allow a third party like Hugging Face to archive and use their data? I think Bluesky's terms specify that users are the owners of their content.

"Huggingface does not do the same for Disney contents or New York Times contents. Why do you think you can do it for data on Bluesky?"

Seriously? You are comparing content from journalists with simple social media posts?

Guys, these are just simple social media posts, not real or "pirated" books; no one is harmed when simple social media posts are transformed into a nice dataset.

In my opinion, Daniel did nothing wrong here, but I guess the Bluesky users do know better.

🙄

I am glad that the internet was invented some years ago and not now. Today it would be named something like "Snowflakernet".

After reading this weird post from the Bluesky account:

A number of artists and creators have made their home on Bluesky, and we hear their concerns with other platforms training on their data. We do not use any of your content to train generative AI, and have no intention of doing so.

So in German we have this nice term "Schöpfungshöhe" (it can probably be translated as "threshold of originality"). I don't see how a Bluesky social media post could reach that level of creation. If you were to upload the latest Stephen King book as plaintext, for example, the situation would be totally different from collecting publicly available Bluesky posts.

@stefan-it the threshold of originality doesn't apply here. the bottom line is that these posts are the legal property of their posters, and no one on huggingface or elsewhere has the right to include them in AI training datasets without the poster's explicit consent. from Wikipedia's article on the GDPR:

No personal data may be processed unless this processing is done under one of the six lawful bases specified by the regulation (consent, contract, public task, vital interest, legitimate interest or legal requirement). When the processing is based on consent the data subject has the right to revoke it at any time.

daniel's inclusion of these bluesky posts does not fall under any of the lawful bases. no user gave him prior consent or entered into a contract with him; this usage doesn't fall under a vital interest or public task; and it can't be considered a legitimate interest because it infringes the property rights of the users.

I disagree with what seems to be the majority view here, that putting the data set up was bad, and taking it down was good. This is, as far as I understand it, material that people have chosen to publish. None of it was private. We should encourage people to make and share archives of published materials. This is how civilisation progresses.

Bluesky is an open platform. That's the beauty of it. It's free for all. Be it a giant tech company or a small-time player, everyone has access to Bluesky's (atproto's) data. Even you can get the data yourself. If you want to see your profile's data, or anyone else's, here's a project page for you: https://atproto-browser.vercel.app/

Do you all not understand the meaning of an open platform? Whatever images/text you put on Bluesky (the app, which communicates with atproto), all of that data is easily accessible by anyone.
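To illustrate the point, here's a minimal, untested sketch; it assumes the public AppView endpoint at public.api.bsky.app and the app.bsky.actor.getProfile XRPC method, both of which work without any login:

```python
# Minimal sketch: fetch a public Bluesky profile with one plain HTTP request.
# Assumes the public AppView endpoint and the app.bsky.actor.getProfile
# XRPC method; no login or API key involved.
import requests

resp = requests.get(
    "https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile",
    params={"actor": "bsky.app"},  # any handle or DID works here
    timeout=30,
)
resp.raise_for_status()
profile = resp.json()
print(profile["handle"], profile.get("followersCount"))
```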

I don't even understand how they would implement a permission system without changing the entire architecture of the protocol itself. They have been building it for 2+ years now on this design decision of an open architecture.

It's an open network; I don't need to ask your permission to use your data, and you don't need to ask my permission to use mine.

This was bound to happen. What did you expect?

Be real.

@maxfzz GDPR applies to personal data, so it comes down to whether social media posts count as personal data. My understanding is that data is "personal" if you can identify a specific individual from it, so the text of the posts themselves probably isn't personal data in the GDPR sense, but the other API fields identifying the author of each post, or any @-mentions of other users, would fall under GDPR. It might be different if the posts were pseudonymized, but even then it's not black and white.

Another key difference is that social media platforms are required by GDPR to have a way to delete everything you ever did. On HF, such options are limited and often unavailable, as all report features go directly to the dataset author rather than a content moderation team.

It's an open network; I don't need to ask your permission to use your data, and you don't need to ask my permission to use mine.

Yes you do. That's the whole point of the GDPR.

@DataDrivenNews :

We should encourage people to make and share archives of published materials. This is how civilisation progresses.

This is a conversation about data ethics, not the value of archives. Perhaps we should educate people about archiving and seek their consent so we can then make and share such archives. I could agree with that. I don't understand your position, because there is no way to ethically advocate for non-permitted scraping and sharing of data.

Do you all not understand the meaning of an open platform? Whatever images/text you put on Bluesky (the app, which communicates with atproto), all of that data is easily accessible by anyone.

It's an open network; I don't need to ask your permission to use your data, and you don't need to ask my permission to use mine.

This was bound to happen. What did you expect?

Be real.

Just because you are able to take it and do as you please doesn't absolve you of the ethical considerations and - perhaps - legal ramifications (perhaps very little, just a ToS violation, or perhaps more; I'm neutral on that subject).

Would you try to take someone's wallet out of their pocket because you can tell it's there and they won't stop you? And further, would you use it for yourself?

Ethics.

edit: I'll be the first to admit, there's some irony that I'm talking about ethics to a "JohnDoe" + random number new account.

There seem to be some misunderstandings in this thread.

First, GDPR only applies to personal data, which mostly means data that can be used to identify a person (think pictures of them, posts they wrote about their families, etc.), and it only protects people in the EU.

Second, and most importantly, by accepting Bluesky's ToS you granted them a license that allows them to:

"Modify or otherwise utilize User Content in any media. This includes reproducing, preparing derivative works, distributing, performing, and displaying your User Content",

as well as grant these rights to others, which it seems they have granted to any developer that uses data from Bluesky.

Based on the complaints I've seen, I think the discussion should be more focused on whether the license granted by the ToS should have a restriction prohibiting use for machine learning.

As long as it doesn't have an exception like that, this will be inevitable. And companies don't even need to tell us about it (other than to people who have shared personal data and are protected by the GDPR).

it's difficult to have an objective conversation around the benefits and harms of current-generation AI; this thread alone demonstrates that different people draw the line at different places (for the record, that's totally okay; you do you).

for the past few years, many people have decried harmful and unethical uses of AI which have, for the most part, gone completely unmoderated. it is not unreasonable that such people would not want to be in any way involved in the development of AI models, and that includes the use of their data, public or otherwise. one is not a "snowflake" for asking to not be involved in the development of harmful technology.

there may not be sufficient legislation preventing organisations or individuals from collecting public user data for AI training, but i think we can all agree that it's the data collector's responsibility to respect people's choices if they are to call what they do "research".

Second, and most importantly, by accepting Bluesky's ToS you granted them a license that allows them to:

"Modify or otherwise utilize User Content in any media. This includes reproducing, preparing derivative works, distributing, performing, and displaying your User Content",

as well as grant these rights to others, which it seems they have granted to any developer that uses data from Bluesky.

No. Wrong interpretation. Hugging Face is NOT Bluesky.

Terms say:

  • "When we say “Bluesky,” “we,” “us,” and “our” in these Terms, we mean Bluesky, PBC."
  • 2.D. "By sharing User Content through Bluesky Social, you grant us permission to (...)"

To me, Bluesky is more like an RSS reader as well as a web service host. And what Hugging Face did is equivalent to scraping all the copyrighted text from individual websites served over RSS, without getting consent, and commodifying the data.

Bluesky's terms make it clear that users are the owners of their data (even though Bluesky can use the data for limited purposes as a service provider). For third-party usage like Hugging Face's, neither Bluesky nor the protocol's inventors even have the right to allow it. You need to get consent from the post creators - the Bluesky users.

perhaps a nitpick, but i find the rationale behind this project to be quite confusing as well (emphasis mine):

I will leave the dataset repository up to allow room for discussion of how datasets can be used to help improve Bluesky

are we aware of any applications where this kind of data scraping could actually benefit bluesky and its users? the only utility i can see here is making minor improvements to LLMs with a bit more training data (though as others in this thread have pointed out, social media posts don't necessarily represent "quality" data).

the motivations behind this kind of data collection are incredibly important, especially if you're scraping people's work without their consent. you can't hide behind "the benefit of humanity" if you can't actually describe a good use-case.

the script used to collect the data (albeit with tweaks to limit the scraping to consenting users) could be extremely useful to small researchers looking to conduct wide social media studies. a possible benefit to humanity depending on the research, but this would still have nothing to do with AI training.
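to sketch what such a consent filter might look like (purely hypothetical; the allowlist file and the record fields here are assumptions for illustration, not the actual script):

```python
# Hypothetical sketch of opt-in filtering: keep only posts whose author DID
# appears in an explicit allowlist of consenting users. The file names and
# the "did" field on each record are assumptions for illustration.
import json

def load_consenting_dids(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def filter_to_consenting(posts, consenting: set[str]):
    return [p for p in posts if p.get("did") in consenting]

if __name__ == "__main__":
    consenting = load_consenting_dids("consenting_dids.txt")
    with open("raw_posts.jsonl") as f:
        posts = [json.loads(line) for line in f]
    kept = filter_to_consenting(posts, consenting)
    print(f"kept {len(kept)} of {len(posts)} posts from consenting users")
```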

Removing the dataset makes no sense.

The data is public by BlueSky's design; anyone can collect all posts and likes in real time without much effort. If someone doesn't like BlueSky's design, they can go to Instagram and enjoy the walled garden with its anti-scraping measures.

If anything, the data should be more available to make it easier to develop better algorithms for the feed, as the default one is just broken.

The data is public by BlueSky's design; anyone can collect all posts and likes in real time without much effort.

this kind of messaging still completely ignores "just because you can, doesn't mean you should". if you want to scrape hundreds of thousands of unsuspecting users' data, you should expect to get valid push-back in return. bluesky's users, by and large, do not want this.

If someone doesn't like BlueSky's design, they can go to Instagram and enjoy the walled garden with its anti-scraping measures.

this is incredibly unfair to bluesky users who left their original platforms to escape their extremely toxic environments; one of those points of toxicity is their data being used for AI training without their consent. instagram is also absolutely not an improvement. if anything, the situation is worse over there than on bluesky.

If anything, the data should be more available to make it easier to develop better algorithms for the feed, as the default one is just broken.

i fail to understand how this kind of dataset could be used to improve algorithms. the raw bluesky firehose is unfiltered, and not reflective of any user-facing bluesky feed. i believe there are much better ways to improve the algorithms that are used on bluesky that do not involve non-consensual user data scraping.

The data is public by BlueSky's design; anyone can collect all posts and likes in real time without much effort.

Would you go out into a public space and record audio of numerous people's conversations without their consent just because it's a "public" space? The same concept applies here. Just because it is "public" and easily accessible data does not mean it should be perfectly fine, especially without ANY anonymization efforts, to gather all of this data into a dataset without the consent of the authors of any posts included.

@arimelody

if you want to scrape hundreds of thousands of unsuspecting users' data, you should expect to get valid push-back in return.

That's my whole point: if you don't like Bluesky's design, it's more productive to go to a walled garden. It's pointless to send death threats to the dataset creator, as anybody can create a similar dataset in a few hours.

bluesky's users, by and large, do not want this.

A small angry mob does not represent everybody. If a large portion of Bluesky users actually think they are on a closed platform, it seems Bluesky will just fail as a project.

i fail to understand how this kind of dataset could be used to improve algorithms.

By analyzing post content or likes, it's possible to design an 'algorithm' that shows more relevant posts. All modern feed algorithms use machine learning ("AI") and interaction data to show relevant content to you.
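As a toy illustration (everything below is made up for the example; real feeds use trained models, not hand-written rules like this):

```python
# Toy relevance scorer: rank posts using a user's past interactions.
# Real feeds use trained ML models; this hand-written rule version only
# shows how post content and like history feed into a relevance score.
def score(post, liked_authors, liked_terms):
    s = 1.0 if post["author"] in liked_authors else 0.0
    s += sum(0.1 for w in post["text"].lower().split() if w in liked_terms)
    return s

posts = [
    {"author": "alice.example", "text": "new open feed algorithm demo"},
    {"author": "bob.example", "text": "lunch pics"},
]
liked_authors = {"alice.example"}    # accounts the user liked before
liked_terms = {"feed", "algorithm"}  # words from posts the user liked
ranked = sorted(posts, key=lambda p: score(p, liked_authors, liked_terms),
                reverse=True)
print([p["author"] for p in ranked])
```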

Would you go out into a public space

@Reapimus Bluesky is not a public space, it's an open social network. Expecting privacy here, you are just fooling yourself, and then you get pointlessly angry at dataset creators. You can get more privacy on closed platforms that put a lot of effort into siloing data, access controls, and resisting scraping.

Expecting privacy here

Nobody is expecting privacy. They expect their wish that their posts not be included in datasets used to train AI models to be respected, and any ethical dataset publisher absolutely should be getting consent from users before including their posts in a dataset.

@Reapimus Just look at real outcomes, not what you wish for. Anyone can easily download Bluesky data, which means it will be used without any restrictions. If you actually care about your data not being used to train AI models, you should use other social networks that make it harder to get your data, e.g. by not showing it to unauthorized users. All data on Bluesky will be used to train AI models, unless they pull the rug on openness.

Multiple people from Hugging Face, including CEO Clem Delangue, Machine Learning Librarian Daniel van Strien, and Principal Ethicist Giada Pistilli, have said it was a mistake to upload the Bluesky dataset to Hugging Face.

Clem: https://bsky.app/profile/clem.hf.co/post/3lbvlyphqd22r
Daniel: https://bsky.app/profile/danielvanstrien.bsky.social/post/3lbvih4luvk23
Giada: https://bsky.app/profile/giada.bsky.social/post/3lbwfa6udf22c

@python273 I agree that it is almost impossible to track AI data pirates.

I am more appalled by the fact that the AI researchers here (who think they themselves and Hugging Face are on the good side) believed they could just scrape it, process and freeze it, upload it, and parade it in front of the very users, without expecting repercussions like this. And they are still keeping an open channel to discuss how to use this dataset (I mean, right here).

This case will definitely be cited when restrictions on AI are shaped in the future. This incident can damage the growth of Bluesky and make it more closed and hostile to AI researchers. It has also tainted Hugging Face's good reputation. The EU could also apply harsher rules to Bluesky regarding user protections in the future.

I don't think preaching to normal people about "reality" helps make the future better for the AI community.

It is very clear from this discussion that most people have no idea how datasets are created or used.

  1. If it is public (and most private info), it is in a dataset.
  2. If a human can read it for free, so can a machine. This is fair use.
  3. If you want this to change, don't make public posts.

This dataset was posted for free; I guarantee there are DOZENS of others with many more posts that you don't know about.
It's like everyone learned nothing from Facebook's use of data and is just now figuring out how the internet works.

The data is public by BlueSky's design; anyone can collect all posts and likes in real time without much effort.

Would you go out into a public space and record audio of numerous people's conversations without their consent just because it's a "public" space? The same concept applies here. Just because it is "public" and easily accessible data does not mean it should be perfectly fine, especially without ANY anonymization efforts, to gather all of this data into a dataset without the consent of the authors of any posts included.

Yes, this happens all the time. Security camera footage records conversations. Anything said in public is fair game.
Also, any conversation not on an FCC-regulated service (a phone call) is very likely recorded.
Any time you hear "for quality assurance", know that it means training an AI.

@mechtronicman Anybody can READ it, but that does not mean anybody can USE it however they want. What do you think about all the copyright footers on most websites? What about all the Creative Commons licenses?

@mechtronicman

Any time you hear "for quality assurance", know that it means training an AI.

Do you think people do not know that? That phrase is a way to protect those companies from legal responsibility. X's recent terms-and-conditions update also caused massive numbers of users to move from X to Bluesky.

Bluesky's terms and conditions do NOT contain any clause allowing a third party like Hugging Face to use user data for whatever purpose.

AI Data Pirates? Seriously? Damage the growth of Bluesky? Holy fish.

You guys are making a mountain out of a molehill.

Also, I don't quite understand what Giada means by "if Bluesky users feel unsafe". If you feel unsafe because others will read and maybe process the posts you trumpeted out on social media, then maybe you should stop posting them... Just one idea.

  1. If a human can read it for free, so can a machine. This is fair use.
  2. If you want this to change, don't make public posts.

@mechtronicman no, that is not how fair use works. further, the behaviour of AI scrapers will not change because people suddenly decide to make private posts.

This dataset was posted for free; I guarantee there are DOZENS of others with many more posts that you don't know about.
It's like everyone learned nothing from Facebook's use of data and is just now figuring out how the internet works.

"everyone else does it, so it must be okay"- this really should not bear repeating. nobody said facebook's use of data is okay and everyone aware of this dislikes it; the same applies here.

there's a seriously concerning argument circling here: "if people don't like us taking their posts, they should just stop posting". this kind of arrogance is exactly how AI researchers have earned such a terrible reputation. it appears to be standard practice for AI training to hold a complete disregard for copyright, user privacy, and research ethics. whenever these issues are brought up, the frequent responses are "you posted it online, so i can do what i want" and "it's not that big a deal", until an actual lawsuit is filed, when the argument hard-pivots to "fair use".

people have a right to participate in the open internet; nobody gave you the right to harvest their participation for profit.

every single time a situation like this occurs, the reputation of AI research gets worse. please do better.

So, this is where the Streisand Effect happened, eh?

The funny thing is, social media posts, especially in a Twitter-like format (small posts with largely irrelevant replies), have very little value. It's kinda worthless to train an LLM with them, unless you're actively trying to dumb down the agent or to mimic those users. The only real use I can think of would be a moderation agent; I guess more examples, even of low quality, are still better than fewer.

At the end of the day, any publicly accessible website is being archived and used in datasets (and also for targeted ads, to "improve" said website, for AI, and anything in between) no matter if you like it or not. The difference here is that it would have helped open source research instead of opaque closed-source ones.

oh well as long as everyone's doing it that's okay

No, my point was:

  1. BlueSky lied to you if they told you anything about some kind of anti-harvesting policy or system on their website.
  2. Every single bit of everything you ever entered in a text box on the internet has already been logged and stored, and has been for decades (and for way worse things than silly talking robots)
  3. You have no expectation of privacy on the internet.

I was not making a value judgement. I'm just noting that you have a very selective kind of indignation (and, here, a counterproductive one).

Watching people go crazy over public social media drivel being used for something other than wasting disk space has been very amusing, so there's that.

Nice trolling. I see the CEO of this company does not think this should be deleted this time. Hope this company and community enjoy all the attention.

https://bsky.app/profile/alpindale.bsky.social/post/3lbxgfmos7s2c
https://huggingface.co/datasets/alpindale/two-million-bluesky-posts/discussions/21

Nice trolling. I see the CEO of this company does not think this should be deleted this time. Hope this company and community enjoy all the attention.

https://bsky.app/profile/alpindale.bsky.social/post/3lbxgfmos7s2c
https://huggingface.co/datasets/alpindale/two-million-bluesky-posts/discussions/21

[screenshot from r/BlueskySocial, 11/27/2024: "This is disgusting, they are stealing all of our information to train their data stealers. Please comment on it to try to take it down and save bluesky"]

From all this interaction, one thing is clear: people have impaired reading comprehension, or don't even know how to read.

If you don't want your data to be included in a dataset, don't interact with any internet services at all.

Do you want companies to give you false assurances that they are not using your data while, in the backend, they definitely are, saying otherwise just so you keep using their platform? Which do you think is better?

Bluesky is by design an open platform, and Bluesky the app doesn't host your data; it is just a consumer of atproto, the decentralized protocol. Whatever you post goes to atproto to be saved, and then Bluesky the app shows it to you.

Your rights over your words are worth as much as toilet paper, and even what I have written and will write is worth toilet paper if I can't enforce those rights. And if I want to enforce those "rights", I need money to sue; individual people can do jack against a giant corporation.

A gentleman made fun of my account name rather than the argument I presented, which says a lot about them. I created this account a while back, chose this username, and am using the account again after several years. If you want, I can change it to something more appealing to you, but my point will still stand.

You all just want people to do this stuff in private so that you don't know about it. I guess that would be better for you all.

Ignorance is bliss, we all know that.

Cheers.

There are distinct legal issues being raised on other Bluesky datasets on this site, not to mention the obvious ethical issues at play when so many Bluesky users, myself included, do not consent to this kind of data scraping for AI training purposes. I recommend you read them, as your data gathering for this dataset is not in the best interests of the ML community, as mentioned in https://huggingface.co/datasets/alpindale/two-million-bluesky-posts/discussions/29

I demand you take down this dataset.

tl;dr

  • you should be able to scrape public data that people put out into the world that ISN'T considered "creative work". maybe you could mark your posts as creative work to prevent scraping.
  • if you don't have community datasets, companies will step in and make them themselves, often with less respect for common rules
  • there could be useful research on these datasets, such as how often alt-text is used, or how often alt-text is fully accurate to words in the image
  • read the full thing though

I think people in this thread and within the bsky community have taken a stance on this that is too restrictive and actually has some negative consequences.

It seems to me that, given that the law is lagging behind by many years and will definitely not rule in a way that completely puts things like OpenAI out of business, we need to be smart about what we are demanding here. I don't think there are fundamental issues with scraping the network for public speech that could not be considered "creative work". The bad practice comes when you take creative work and use it to train models that can then reproduce work similar to the original artist's without compensating them, or when you scrape large amounts of these social posts and don't deidentify them. I don't think you idiots need to be compensated for your bsky shitposts.

I think any image that doesn't just contain text should be automatically assumed to be creative work, and not be stored. If a user has a text post that they consider "creative work" and don't want added to training sets, they should be able to label it as such, and training sets should respect that. But in the end, all posts should be deidentified so that, within the dataset, you can't easily build profiles of specific people. I'd prefer it if all posts were disconnected from accounts and just added to one pile, but if a researcher needed posts connected to profiles, I think they should have to contact bsky directly and ask for permission to have that data and use it only for a specific research purpose.

We need to remember that the network could be scraped regardless of what rules we attempt to put in place. And if researchers don't create these datasets, companies will, but companies won't pay attention to these guardrails like researchers would.

We could be missing out on a good amount of research by disallowing these public datasets, such as information on how often alt-text is used, how often alt-text is fully accurate to the words in the image, or regular sentiment analysis of the network. Yes, I'm sure companies will use these datasets as input for LLMs, but I don't think that's inherently wrong. It's only wrong if they don't deidentify the data or use content marked as "creative work" (which all non-text images should be by default) to train with.
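For what it's worth, the alt-text study could be a few lines over any such dataset (a rough sketch; the posts.jsonl dump and the record shape, an "images" list whose items may carry an "alt" string, are assumptions for illustration, not the real schema):

```python
# Rough sketch: measure how often image posts include alt text.
# The input file and record shape are assumptions for illustration.
import json

with_images = with_alt = 0
with open("posts.jsonl") as f:  # hypothetical dump, one JSON post per line
    for line in f:
        post = json.loads(line)
        images = post.get("images") or []
        if images:
            with_images += 1
            if any(img.get("alt", "").strip() for img in images):
                with_alt += 1

if with_images:
    pct = 100 * with_alt / with_images
    print(f"alt-text usage: {with_alt}/{with_images} image posts ({pct:.1f}%)")
```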
