Unfortunately, Glaze does not seem to work. When I trained a simple style LoRA with SDXL on a few sets of glazed images, the LoRA was still able to reproduce the style.
Another unfortunate consequence of the introduction of Glaze and Nightshade is that some artists whom I follow have started glazing all of their newly published works, leading to quite ugly results due to the noise Glaze produces on high settings, despite its questionable efficacy.
It's snake oil, and it'd be snake oil even if it worked.
I've yet to hear of it doing anything. I've never heard anyone in an AI group worried about it in any way. No "damn, Glaze ruined my LoRA". To the extent anyone talks about it, it's either non-technical artist groups, or AI groups where somebody intentionally sets out to play with it to see if they can actually make it do something.
But even if it worked in its intended scope, it'd still be snake oil, because you can't defeat every AI system simultaneously. Flaws can be exploited, but flaws aren't guaranteed to persist in the long term (and almost certainly won't), so anything that works now isn't going to work tomorrow. And defending against known models today is pointless, because those models have already been successfully trained.
The whole idea of attacking an already finished product is fundamentally flawed, and would only work in extremely unlikely and contrived cases: say, v1 not being very good, so the model's maker decides, long after a well-publicized adversarial attack on v1, to pull in additional data and incorporate it into v2.
>> Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style.
That's supposed to be the single most important sentence in the entire article, but it ended up being a mouthful that hardly makes sense.
>> So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
"when" and "then" don't work like that.
I'm still trying to find a crisp solution statement beyond "is a system designed to protect human artists by disrupting style mimicry".
From what I remember, Glaze uses a small CLIP model and LPIPS (based on VGG) for its adversarial loss, which is why it's so ineffective against larger, better-trained models.
It uses Stable Diffusion to do a style transfer on the image via image-to-image, then runs gradient descent on the image itself to minimize the difference between the CLIP embeddings of the perturbed image and the style-transferred one, while also trying to keep LPIPS low; after every step the perturbation is clamped so it doesn't exceed a certain threshold from the original image.
So essentially it's an adversarial attack against a small CLIP model, even though today's models are much more robust than that.
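For anyone curious, the optimization loop described above can be sketched roughly like this. This is NOT Glaze's actual code: the encoder here is a tiny stand-in for CLIP, and a plain pixel-space L2 penalty stands in for the LPIPS perceptual term. It just shows the shape of the attack: pull the embedding toward a style-transfer target while clamping the perturbation to a small epsilon ball.

```python
# Hypothetical sketch of a Glaze-style perturbation loop (PGD-like),
# with a toy encoder in place of CLIP and pixel L2 in place of LPIPS.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a frozen CLIP image encoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 32),
)
for p in encoder.parameters():
    p.requires_grad_(False)

original = torch.rand(1, 3, 32, 32)      # the artwork
style_target = torch.rand(1, 3, 32, 32)  # img2img style-transferred version
target_emb = encoder(style_target).detach()

eps = 0.03  # max per-pixel perturbation ("threshold from the original")
delta = torch.zeros_like(original, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for _ in range(100):
    opt.zero_grad()
    emb = encoder(original + delta)
    # Pull the embedding toward the style target; penalize visible change
    # (pixel L2 here stands in for the LPIPS term).
    loss = (emb - target_emb).pow(2).mean() + 0.1 * delta.pow(2).mean()
    loss.backward()
    opt.step()
    # Clamp so the perturbation never exceeds eps anywhere.
    with torch.no_grad():
        delta.clamp_(-eps, eps)

glazed = (original + delta).clamp(0, 1)
print(float(delta.abs().max()) <= eps + 1e-6)  # True: change stays bounded
```

The key point is that the gradients come from one specific frozen encoder, which is why the perturbation transfers poorly to newer, more robust models.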
Snake oil. Even if it worked in a way that wouldn't be bypassed quickly, it was too late, and the few artists who've applied it aren't enough to matter in the next training runs. Watching artists pull down years, sometimes decades of already scraped galleries to apply sketchy anti-AI magic was distressing.
Their objective is not so much to fight mass scraping as to prevent fine-tunes with their name on Civitai, copying them specifically. Which happens a lot.
Sadly I agree that Glaze doesn't really work for it.
Ironically it tends to introduce the kind of artifacts that can exist in AI-generated pics.
It doesn't do anything. It shouldn't be shared, in case people who don't know better are tricked into believing it does.
Personally I use AI to generate style descriptions without the artist's name and the song's name, to work around this.
This page is light on technical detail. What does Glaze do to an image specifically?
Details are in the paper: https://people.cs.uchicago.edu/~ravenben/publications/pdf/gl...
I don't quite have the domain knowledge to evaluate it, but the claims seem outlandish.
Cat and mouse ...
> Many work primarily on mobile devices
I hope they mean tablets here, and not phones. I can't imagine any artist being more productive or effective on a tiny screen vs a large screen.
Unfortunately, questioning Glaze gets you labelled as an enemy. "They want you to think it doesn't work", etc.