Wikipedia talk:AI-generated content
Use cases of AI image generators
Can AI image generators like Stable Diffusion be used to generate original images that present renditions of subjects that can't simply be captured? 2001:448A:3043:762C:746D:DED5:4321:8398 (talk) 01:35, 8 February 2023 (UTC)
- In some limited cases, I think yes. I believe it would be appropriate for human-curated, illustration-style portraits similar to the Drawtober project on French Wikipedia, and perhaps for other illustrations as well (for example, architectural recreations). It would probably not be appropriate for photo-realistic images of people, as this could mislead readers. Pharos (talk) 23:50, 13 February 2023 (UTC)
- Here is a domain where AI-generated images might be particularly useful on Wikipedia articles: c:Category:Mythological illustrations by Midjourney.--Pharos (talk) 18:50, 14 February 2023 (UTC)
- Why would this be useful? A generated image inherently misrepresents how a mythological topic was interpreted by the culture that created it. Ibadibam (talk) 18:04, 24 January 2025 (UTC)
- Possibly when the quality improves. This is an evolving topic. I reverted this insertion of an AI-generated "photo" to illustrate Apache HTTP server. —DIYeditor (talk) 15:53, 30 May 2023 (UTC)
- I would say sometimes. It depends on the image and should be decided on a case-by-case basis. I would oppose a general ban, but I'd definitely say that we should never use them for photorealistic images of specific named people, which could introduce confusion (and BLP issues for living ones.) OTOH for images intended to illustrate articles about AI I would say they're uncontroversial enough to be allowed currently, and for images of other topics using them ought to be possible. --Aquillion (talk) 17:09, 30 May 2023 (UTC)
- Like the others have said, I believe AI-generated imagery can be an invaluable tool for illustrating abstract concepts that would be difficult or impossible to capture in the real world such as this image used on French Wiktionary's article on Buridan's ass. I could also see their suitable use in mythology to illustrate stories or figures for which no free alternatives exist, which is very common, especially for lesser-known countries. There should definitely be restrictions, but I think completely disallowing AI-generated imagery is pretty short-sighted. Elspamo4 (talk) 19:03, 3 June 2024 (UTC)
- This is a good example of a generated image that distorts the original concept. The ass is meant to be placed either between two identical food choices, or between equally pressing choices of food and water. But the image generator makes the two feed bags appear to contain different substances. Unsurprisingly, the image had already been replaced in that article by the time of your comment. Ibadibam (talk) 18:04, 24 January 2025 (UTC)
- I most certainly do not see AI-generated imagery as suitable for visualizing lesser-known cultures, given how AI generators are biased by what is and isn't present in their training data. If an AI has no training on a subject, it's not going to produce an accurate image of it. Granted, one could argue that the same could be said of human artists drawing things based on what they've heard and not seen or experienced, but I don't buy AI as the solution to this, particularly given its propensity to hallucinate or get things completely wrong. Black Nazis, anyone? Not every article needs an image, and unless there's an existing AI image that has been widely reported on by multiple sources for its accuracy or notability, we shouldn't be using AI-generated images for the sake of illustrating imageless articles, much less allowing anyone to use whatever image generator they want, using any prompt they want to achieve it. —k6ka 🍁 (Talk · Contributions) 22:31, 24 January 2025 (UTC)
Requested move 5 March 2025
- The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review after discussing it on the closer's talk page. No further edits should be made to this discussion.
The result of the move request was: page moved. Andrewa (talk) 01:31, 14 March 2025 (UTC)
Wikipedia:Computer-generated content → Wikipedia:AI-generated content –
- I think this makes sense? Hopefully? Mild support as proposed Red Slash 17:53, 8 March 2025 (UTC)
- Support per nom, the argument makes sense. History6042😊 (Contact me) 13:08, 13 March 2025 (UTC)
- Support - If this ~~essay~~ proposal had been around longer, it would likely be more of a debate; however, since it was created in 2023, it seems fair that "AI" would be the most recognizable term to use here, especially as it specifically refers to machine learning and similar concepts, rather than something like "computer-generated graphics", which would be significantly different from "AI-generated graphics" (in that case, rendering itself, which at one point was novel enough to be termed "computer-generated"; I wouldn't be surprised if the meaning of "computer-generated" itself begins to skew over time). ASUKITE 19:24, 13 March 2025 (UTC)
Identification
I recommend a section about how AI-generated content can be identified. One example would be citation style, or even edits that cite Wikipedia itself. Drew Stanley (talk) 01:38, 27 March 2025 (UTC)
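As a rough illustration of the kind of heuristics such a section might describe, here is a minimal sketch in Python. The function name and regex patterns are hypothetical examples, not an established detector; real identification would need many more signals plus human judgment.

```python
import re

# Hypothetical illustrative patterns -- not an established detector.
SELF_CITATION = re.compile(r"https?://\w+\.wikipedia\.org/wiki/", re.IGNORECASE)
CHATBOT_BOILERPLATE = re.compile(
    r"as an AI language model|as of my last knowledge update",
    re.IGNORECASE,
)

def flag_suspect_edit(wikitext: str) -> list[str]:
    """Return reasons an edit might warrant closer human review."""
    reasons = []
    if SELF_CITATION.search(wikitext):
        reasons.append("cites Wikipedia itself as a source")
    if CHATBOT_BOILERPLATE.search(wikitext):
        reasons.append("contains chatbot boilerplate phrasing")
    return reasons

print(flag_suspect_edit(
    "According to [https://en.wikipedia.org/wiki/Example Example], ..."
))
# ['cites Wikipedia itself as a source']
```

Pattern matching like this can only surface candidates for review; reliably distinguishing AI-generated text from fluent human writing remains an open problem.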
Problem of recursive AI editing of Wikipedia
I think we have to accept that Wikipedia is now a major source for people, rather than thinking of it as outside of normal life. This, for instance, is why we have so many POV pushers desperately trying to change things in it. It also means it is a major source for LLMs, and unfortunately they are pretty good at truthiness, or plausibility. If you look at "AI models collapse when trained on recursively generated data" you can see there is a big problem here that will get worse the more LLMs are used. What we will have is persistent POV pushers for all sorts of silly things making articles worse even if they read better and get more liable to pass checks.
I think we need to accept that Wikipedia is a major source, in which case we should strongly reject any LLM changes and just accept the human-generated part of it as part of its basic reliability as far as being used as such a source - even if the text isn't as good as an LLM might transform it into. NadVolum (talk) 22:01, 17 April 2025 (UTC)
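The collapse phenomenon referenced above (Shumailov et al., "AI models collapse when trained on recursively generated data", Nature, 2024) can be illustrated with a deliberately simplified toy in Python: fit a trivial "model" (just a mean and a standard deviation) to some data, then train each successive generation only on samples drawn from the previous generation's model. The sample size and generation count below are arbitrary choices for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-written" data, a standard normal distribution.
samples = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 501):
    # "Train" a trivial model: estimate mean and spread from the data.
    mu, sigma = samples.mean(), samples.std()
    # The next generation sees only the previous model's output.
    samples = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 100 == 0:
        print(f"generation {generation:3d}: std = {sigma:.6f}")

# The estimated spread shrinks toward zero: a little of the original
# data's tails is lost at each step and never recovered, a toy
# analogue of model collapse on recursively generated text.
```

In the real setting the "model" is an LLM and the lost tails are rare facts and minority viewpoints, which is the basis of the comment's argument for keeping the human-written corpus identifiable.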
- "What we will have is persistent POV pushers for all sorts of silly things making articles worse even if they read better and get more liable to pass checks."
- That's been happening since around 2004, when Wikipedia results started getting boosted by Google to the top of their search results. I feel like AI is actually making things better, but savvy editors who are getting 80% of the sources in a typical article reliably verified in a few minutes don't want to admit they're using AI for anything, because they'll get flamed. And you're certainly not going to see that on Toolforge or even the next version of ORES any time soon; it's just too much of a hot potato.
- The thing is, all the kinds of pernicious POV pushing happened way before AI, by human editors who can sound more encyclopedic than AI can, so all the watchlister gnomes know exactly what to look for. Can you point to an example where AI made POV pushing more successful for any substantial length of time? Cramulator (talk) 01:53, 28 April 2025 (UTC)