19 comments

  • Antibabelic an hour ago

    I found the page Wikipedia:Signs of AI Writing[1] very interesting and informative. It goes into a lot more detail than the typical "em-dashes" heuristic.

    [1]: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

      jcattle 23 minutes ago

      An interesting observation from that page:

      "Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated."

        embedding-shape 9 minutes ago

        I think that's a general guideline for identifying "propaganda", regardless of the source. I've seen people write statements like that entirely by hand, and I know many people who speak like that (shockingly, most of them are in management).

        Lots of those points get at the same idea, which seems like a good balance: it's the language itself that is problematic, not how the text came to be, so it makes sense to focus entirely on the language the text uses.

        Hopefully those guidelines make all text on Wikipedia better, not just the LLM-produced parts, because they seem like generally good guidelines even outside the context of LLMs.

        robertjwebb 11 minutes ago

        The funny thing is that this also appears in bad human writing. We would be better off if vague statements like this were eliminated altogether, or replaced with less fantastical but verifiable ones. If that means nothing of the article is left, then we have killed two birds with one stone.

        bspammer 11 minutes ago

        That sounds like Flanderization to me https://en.wikipedia.org/wiki/Flanderization

        In my experience with LLMs, that's a great observation.

        eurekin 18 minutes ago

        That puts into words something I felt but couldn't articulate myself. Spectacular quote.

          jcattle 2 minutes ago

          I'm thinking quite a bit about this at the moment in the context of foundation models and their inherent (?) regression to the mean.

          Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).

          These take in vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces that contain meaningful semantic features.
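
          To make that concrete, here is a toy sketch of the pattern (PyTorch-flavored; every shape and name is invented, and the real models are far larger and typically masked-transformer based):

            import torch
            import torch.nn as nn

            # Toy version of the recipe: encode a multispectral patch into one
            # embedding vector, decode it back, train on reconstruction error.
            class PatchAutoencoder(nn.Module):
                def __init__(self, in_channels=12, embed_dim=128):
                    super().__init__()
                    self.encoder = nn.Sequential(
                        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),  # 64 -> 32
                        nn.ReLU(),
                        nn.Conv2d(32, 64, 3, stride=2, padding=1),           # 32 -> 16
                        nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1),
                        nn.Flatten(),
                        nn.Linear(64, embed_dim),
                    )
                    self.decoder = nn.Sequential(
                        nn.Linear(embed_dim, 64 * 16 * 16),
                        nn.Unflatten(1, (64, 16, 16)),
                        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),           # 16 -> 32
                        nn.ReLU(),
                        nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),  # 32 -> 64
                    )

                def forward(self, x):
                    z = self.encoder(x)                  # the "semantic" embedding
                    return self.decoder(z), z

            model = PatchAutoencoder()
            patches = torch.randn(8, 12, 64, 64)         # batch of 12-band 64x64 patches
            recon, emb = model(patches)
            loss = nn.functional.mse_loss(recon, patches)  # reconstruction objective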

          The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.

          There's also an observation by one of the authors of Major-TOM, which likewise provides satellite input data for training models, that the scaling rule does not seem to hold for geospatial foundation models: more data does not seem to result in better models.

          My (completely unsupported) theory on why that is: unlike writing or coding, with satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and is proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, destroyed buildings, new airstrips in the Amazon, and so on. As I see it, the currently used frameworks do not support that very well.
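
          The usual mitigation for that kind of imbalance is to re-weight the loss toward the rare classes; a toy sketch of the idea (made-up pixel counts, and I'm not claiming the benchmarked models do exactly this):

            import torch
            import torch.nn as nn

            # Made-up pixel counts: "flooded" is ~0.1% of the data, so an
            # unweighted loss barely penalises a model that never predicts it.
            pixel_counts = torch.tensor([9_990_000.0, 10_000.0])   # [background, flooded]
            class_weights = pixel_counts.sum() / (len(pixel_counts) * pixel_counts)

            criterion = nn.CrossEntropyLoss(weight=class_weights)  # rare class weighted up

            logits = torch.randn(4, 2, 64, 64)           # (batch, classes, H, W)
            labels = torch.randint(0, 2, (4, 64, 64))    # per-pixel ground truth
            loss = criterion(logits, labels)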

          But I'd be curious how others who are more knowledgeable in the area see this.

  • feverzsj 4 minutes ago

    Didn't they just sell access to all the AI giants?

  • maxbaines an hour ago

    This is hardly surprising given the announcement "New partnerships with tech companies support Wikipedia’s sustainability", which relies on human content.

    https://wikimediafoundation.org/news/2026/01/15/wikipedia-ce...

      jraph an hour ago

      I agree with the dig, although it's worth mentioning that this AI Cleanup page's first version was written on the 4th of December 2023.

  • KolmogorovComp an hour ago

    I wish they also spent effort on the reverse: automatically rephrasing the (many) articles that are obscure, very poorly worded, and/or written with no neutral tone whatsoever.

    And I say that as a general Wikipedia fan.

      philipwhiuk an hour ago

      WP:BOLD and start your own project to do it.

  • weli 22 minutes ago

    I don't see how this is going to work. 'It sounds like AI' is not a good metric for removing content.

      embedding-shape 7 minutes ago

      If that's your takeaway, you need to read the submission again, because that's not what they're suggesting or doing.

      ramon156 15 minutes ago

      This is about wiping unsourced and fake AI-generated content, which can be confirmed by checking whether the sources are valid.

  • progbits 38 minutes ago

    The Sanderson wiki [1] has a time-travel feature where you can read a snapshot taken just before the publication of a given book, ensuring no spoilers.

    I would like a similar pre-LLM Wikipedia snapshot. Sometimes I would prefer potentially stale or incomplete info rather than have to wade through slop.

    1: https://coppermind.net/wiki/Coppermind:Welcome

      Antibabelic 33 minutes ago

      But you can already view past versions of any page on Wikipedia. Go to the page you want to read, click "View history" and select any revision before 2023.
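
      If clicking through the history gets tedious, the same lookup can be scripted against the MediaWiki revisions API; a rough sketch (the page title is just an example):

        import requests

        # Rough sketch: ask the API for the newest revision made before 2023,
        # then build a permalink to that snapshot.
        API = "https://en.wikipedia.org/w/api.php"
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": "Wikipedia",              # example page title
            "rvlimit": 1,
            "rvdir": "older",                   # newest first, starting at rvstart
            "rvstart": "2023-01-01T00:00:00Z",  # i.e. latest revision before 2023
            "rvprop": "ids|timestamp",
            "format": "json",
        }
        pages = requests.get(API, params=params).json()["query"]["pages"]
        rev = next(iter(pages.values()))["revisions"][0]
        print(f"https://en.wikipedia.org/w/index.php?oldid={rev['revid']}")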

        progbits 32 minutes ago

        I know, but it's not as convenient when you have to keep scrolling through revisions.