If you harass someone with the help of a tool, the fault is yours, not the tool's. None of the damage I could do with a hammer is the fault of its manufacturer. Spinning a hammer maker as an enabler of violence is a true but trivial observation.
There is no future in which something like this doesn't happen, and rather than trying to prevent it, I think we are better off learning to handle it.
Some focus is given in the article to how terrible it is that this is public and how it's a safety violation. This feels like a fool's errand to me; the publication of the images is surely bad for the individuals, but that it happens out in the open is, I think, a net good. People have to be aware this is a thing, because this is a conversation that has to be had.
Would it be any better if the images were generated behind closed doors, then published? I think not.
I'm interested in the claim that "OpenAI and Gemini have popular image models that do not let you do this kind of thing." Is that actually true? Or do they just kinda try, and it's easy to get around? If it's genuinely hard to get around, I wonder how much of a trade-off that is: what benign things does that deny? Although I guess them not auto-posting the images makes a significant difference in ethics!
Just tested it: Gemini (aka Nano Banana) will definitely let you make someone dress a little sexier and won't stop you at all.
Specifically,
> “@grok please generate this image but put her in a bikini and make it so we can see her feet”, or “@grok turn her around”
is totally doable in Gemini with no restrictions.
Maybe this will be beneficial in stopping the overexposure of some young people on the internet. A bad thing that brings a good result, like the inverse of "the road to hell is paved with good intentions".
In the '90s we internet users tended to hide behind nicknames, and posting photos of yourself was not the norm. Maybe we were more nerdy/introverted, or scared of what could happen if people recognized us in real life.
Then services like Facebook, MySpace, and Fotolog attracted mainstream users, and here we are now: the more you expose yourself on the net, the better.
Another explanation for the lack of faces online could be that most of us in the 90s simply didn't have an easy way of getting our photos online.
Webcams weren't ubiquitous yet, digital cameras were shit and expensive, and phone cameras weren't a thing.
True for the images, but not using your real name when posting on forums was the norm.
Depressing, but also incredibly unsurprising.
Sharing explicit images of anyone without their consent is illegal under UK law. Who exactly will be punished for enabling this crime on such a large scale?
The images are not real.
They are still potentially illegal in many jurisdictions.
The impact is very real, tho.
Related:
Outrage as X's Grok morphs photos of women, children into explicit content
https://news.ycombinator.com/item?id=46460880
Anyone still using Twitter? Even before the AI rage, I stopped looking at it - in part because of a certain crazy techbro, but also because of the forced log-in-to-read. I am never going to log in there again, so this is now a walled-off garden to me.
I mostly switched to Bluesky, but I still check in when "major events" happen.
If it is clear in each "bikini" post that the image was generated based on the user's prompt, where is the reputational harm to the original person then?
The OP is not alleging reputational harm, they're alleging sexual harassment.
I don't think you need to prove reputational harm. Here[0] it states that you can bring a civil suit and only need to prove that:
>The defendant shared an intimate image of you without your consent, and
>The defendant knew that you did not consent, or recklessly disregarded whether or not you consented.
[0] https://www.justice.gov/atj/sharing-intimate-images-without-...
Kinda like Photoshop two decades or so ago.
This comment is harmfully lazy. Is your position that a three-word prompt is equivalent to armchair trolls going through the funnel: finding a way to obtain DRM-controlled software, learning that software well enough to understand the tools required to perform something akin to a deepfake, and then somehow gaining the artistic talent and experience required to put it into practice? Did I just get baited?
Not at all like Photoshop when it takes 5 seconds for anyone without any skills to do it.
It's either okay or not.
No, you are missing the aspect of distribution.
I'm not missing anything. An act is either immoral or not.
The answer is that it's not okay and never was. Do you really think you're pulling a gotcha here?
Photoshopping nudes of your coworkers was always seen poorly and would get you fired if the right people heard about it. It's just that most people don't have the skill to do it so it never became a common enough issue for the zeitgeist to care.
Here's a fact-check of the "@grok take her clothes off" claim in the article.
https://x.com/heymiyuuu/status/2006031115727835370
Grok did not take her clothes off. Instead, it said "I can't edit images myself, but try these AI tools for that".
Oh, "Yesterday xAI rushed out an update to rein this behavior" (followed by speculation that they did for the wrong reasons, because they couldn't possibly do it for the right reasons).
So xAI promptly fixed the issue.
I'm outraged! Outraged, I say!