Rich Hickey: Thanks AI

101 points | by austinbirch 2 hours ago

35 comments

  • afandian 2 hours ago

    It’s heartwarming to see Rich Hickey corroborating Rob Pike. All the recent LLM stuff has made me feel that we suddenly jumped tracks into an alternate timeline. Having these articulate statements from respected figures is a nice confirmation that this is indeed a strange new world.

      dvt an hour ago

      This is all just cynical bandwagoning. Google/Facebook/etc. have done provable, irreparable damage to the fabric of society via ads/data farming/promulgating fake news, but now that it's in vogue to hate on AI as an "enlightened" tech genius, we're all suddenly worried about... what? Water? Electricity? Give me a break.

      The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.

        llmslave2 an hour ago

        Even ignoring that someone's views can change over time, working on an OSS programming language at Google is very different from designing algorithms to get people addicted to scrolling.

          dvt an hour ago

          Where do you think his "distinguished engineer" salary came from, I wonder? There are plenty of people working on OSS in their free time (or in poverty, for that matter).

            llmslave2 42 minutes ago

            Shouldn't you be thinking "it's nice Google diverted some of their funds to doing good" instead of trying to tie Pike's contributions in with everything else?

              dvt 24 minutes ago

              This conversation isn't about Google's backbone, it's about Pike's and Hickey's. It's easy to moralize when you've got nothing to lose, and the lecture holds that much less water.

        duped 30 minutes ago

        Both can be bad. What's hard, though, is convincing the people who work on these things that they're actively harming society (in other words, most people working on ads and AI are not good people; they're the bad guys but don't realize it).

  • sethev an hour ago

    This and Rob Pike's response to a similar message are interesting. There's outrage over the direction of software development and the effects that generative AI will have on society. Hickey has long been an advocate for putting more thought (hammock time) into software development. Coding agents on the other hand can take little to no thought and expand it into thousands of lines of code.

    AI didn't send these messages, though, people did. Rich has obscured the content and source of his message - but in the case of Rob Pike, it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.

    We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).

      llmslave2 44 minutes ago

      Not to be pedantic, but AI absolutely sent those emails. The instructions were very broad and did not specify email, afaik. And even if they did, when Claude Code generates a 1,000-line file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.

  • ta9000 7 minutes ago

    It wasn’t AI that decided not to hire entry-level employees. Rich should be smart enough to realize that, and he probably has employees of his own. So go hire some people, Rich.

  • perfmode 36 minutes ago

    If you’re going to pen a letter to Rich Hickey, the least you can do is spring for Opus.

  • pests an hour ago

    Another victim of the AI village from the other day?

  • RodgerTheGreat an hour ago

    Looking forward to seeing all the slop enthusiasts pipe up with their own llm-oriented version of the age-old dril tweet:

    "drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"

  • bigyabai an hour ago

    I dub this new phenomenon "slopbaiting"

  • yooogurt an hour ago

    I have seen similar critiques applied against digital tech in general.

    Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...

    Technological change changes lots of things.

    The jury is still out on LLMs, much as it was for so much of today's technology during its infancy.

      pdpi an hour ago

      AI has an image problem around how it takes advantage of other people's work, without credit or compensation. This trend of saccharine "thank you" notes to famous, influential developers (earlier Rob Pike, now Rich Hickey) signed by the models seems like a really glib attempt at fixing that problem. "Look, look! We're giving credit, and we're so cute about how we're doing it!"

      It's entirely natural for people to react strongly to that nonsense.

  • zmgsabst an hour ago

    I don’t think human slop is more useful than LLM slop.

    A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.

    Perhaps your views would be more persuasive if, instead of frothing out rage slop, you showed the superiority of human authors to LLMs?

    …because posts like this do the opposite, making it seem like bloggers are upset that LLMs are horning in on their slop-pitching grift.

    Edit:

    For fun, I had ChatGPT rewrite his post and elaborate on the topic. I think it did a better job explaining the concerns than most LLM critics.

    https://chatgpt.com/share/6951dec4-2ab0-8000-a42f-df5f282d7a...

      jhhh 2 minutes ago

      What factual errors did you, the human, notice?

      yooogurt an hour ago

      If you haven't heard of Rich Hickey, then you're fortunate to have the opportunity to watch "Simple Made Easy" for the first time: https://m.youtube.com/watch?v=LKtk3HCgTa8

        zmgsabst an hour ago

        I know who he is.

        This is substandard slop though, being devoid of any real critique and merely a collection of shotgunned, borderline-incoherent jabs. Criticizing LLMs by turning in even lower quality slop is behavior you’d expect from people who feel threatened by LLMs rather than people addressing a specific weakness in or problem with LLMs.

        So like I said:

        Perhaps he should try showing me LLMs are inferior by not writing even worse slop, like this.

      llmslave2 30 minutes ago

      > A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.

      Maybe by people who don't share the same ideological worldview.

      I'll almost always take human slop over AI slop, even when the AI slop is better along some categorical axis. Of course there are exceptions, but as I grow older I find myself appreciating the humanity more and more.

      stanleykm an hour ago

      Rich Hickey designed Clojure.

        maplethorpe an hour ago

        I asked Claude if it could design Clojure and it said yes. Maybe people like Hickey just aren't needed anymore.

  • kenforthewin an hour ago

    Pure cringe. I'd rather read 100 "ai slop" posts than another such uninformed anti-ai tirade.

      drcode an hour ago

      I think it is likely your wish will be fulfilled: ai slop posts and spam emails for everyone, at a scale that will be monumental.

        CPUstring an hour ago

        Slop existed before AI at a monumental scale. Meta and Alphabet made sure of that.

      llmslave2 an hour ago

      Why are you even here, then? Go ask your LLM of choice to spit out 100 articles for you to read. You can even have it generate comments!

  • djoldman an hour ago

    Companies and people by and large are not forced to use AI. AI isn't doing things, people and corporations are doing things with AI.

    I find it curious how often folks want to find fault with the tools and not with the systems of laws, regulations, and conventions that incentivize using them.

      turtletontine an hour ago

      Many people are, indeed, being forced to use AI by ignorant bosses, who often blame their own employees for the AI’s shortcomings. Not all bosses everywhere, of course, and it’s often just pressure to use AI rather than force.

      Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.

      So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.

        djoldman 30 minutes ago

        I guess if someone's boss forces them to use a tool they don't want to use, then the boss is to blame?

        If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.

      llmslave2 an hour ago

      I'm sympathetic to your point, but practically it's easier to try to control a tool than it is to control human behaviour.

      I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.

      netfortius an hour ago

      > "AI isn't doing things, people and corporations are..."

      Where have I heard a similar reasoning? Maybe about guns in the US???

        djoldman 27 minutes ago

        Guns can be and are used to murder people directly in the physical world.

        The overwhelming majority (perhaps the entirety) of generative AI use is not to murder people. It's to generate text/photo/video/audio.

          RodgerTheGreat 21 minutes ago

          Generative AI is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom identification ebooks), to drive people to mental health breakdowns via "ai psychosis" and much more. The harm is real and material, and right now is causing unemployment, physical harm, imprisonment, and in some cases death.

      RodgerTheGreat an hour ago

      Why not both? When you make tools that putrefy everything they touch, on the back of gigantic negative externalities, you share the responsibility for the resulting garbage with the people who choose to buy it. OpenAI et al. explicitly thrive on outpacing regulation and on using their lobbying power to ensure that any regulations that do emerge are written in their favor.