Woah, the thing that leapt out at me, as a professor, is that they somehow got an exemption from the UMN institutional review board. Uh, how?? It's clearly human subjects research under the conventional federal definition[1], it obviously posed a meaningful risk of harm, and it was conducted deceptively. Someone must have been thoroughly asleep at the wheel at that IRB.
The whole story is a good example of why IRBs exist in the first place --- in most stories that aren't about this Linux kernel fiasco, they generally get cast as the bad guys.
The ultimate problem is that it's easy to fake stuff, so you have to use heuristics to decide whom you can trust. You sum up a sort of threat score and then decide how much scrutiny to apply. Without something like that, the transaction costs dominate and certain valuable things can't get done. It's true that a Western university affiliation is generally a positive component of that score, and being a student under a professor there is another positive component.
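The scoring-and-attention heuristic described above could be sketched roughly like this (every signal and weight here is made up for illustration; this is not anything maintainers actually run):

```python
# Toy sketch of the "sum up a threat/trust score, then decide how much
# attention to apply" heuristic. All signals and weights are invented.

def trust_score(contributor):
    """Sum weighted trust signals; higher means the contributor has
    earned lighter scrutiny."""
    score = 0
    if contributor.get("university_affiliation"):
        score += 2  # e.g. a recognizable institutional address
    if contributor.get("known_advisor"):
        score += 1  # working under a professor you already trust
    # Track record counts, but cap it so it can't swamp everything else.
    score += min(contributor.get("accepted_patches", 0), 10)
    if contributor.get("throwaway_email"):
        score -= 3  # cheng3920845823@gmail.com-style addresses
    return score

def review_effort(score):
    """Map the score to how much attention a reviewer might apply."""
    if score >= 8:
        return "light review"
    if score >= 3:
        return "normal review"
    return "deep review + static analysis"
```

For example, a first-time contributor from a throwaway address would land in the "deep review + static analysis" bucket, while a university-affiliated contributor with a few accepted patches would get a normal review. The point is exactly the one made above: without some cheap triage like this, every patch would cost the same (high) review effort.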
It's like if my wife said "I'm taking the car to get it washed" and then she actually takes the car to the junkyard and sells it. "Ha, you got fooled!". I mean, yes, obviously. She's on the inside of my trust boundary and I don't want to live a life where I'm actually operating in a way immune to this 'exploit'.
I get that others object to the human-experimentation part of this and so on, but for me that could be justified by a sufficiently high bar of utility. The problem is that this research is useless.
No, random anonymous contributors with cheng3920845823@gmail.com as their email address are not as trustworthy as your wife, and blindly merging PRs from them into some of the most security-critical and widely used code in the entire world without so much as running a static analyzer is not reasonable.
Oh I misunderstood the sections in the article about the umn.edu email stuff. My mistake. The actual course of events:
1. Prof and students make fake identities
2. They submit patches containing hidden vulnerabilities to Greg KH and friends under these identities
3. Some of these patches are accepted
4. They intervene at this point and reveal that the patches are malicious
5. The patches are then not merged
6. This news comes out and Greg KH applies a big negative trust score to umn.edu
7. Some other student submits a buggy patch to Greg KH
8. Greg KH assumes that it is more research like this
9. Student calls it slander
10. Greg KH institutes a policy for his tree that all umn.edu patches be auto-rejected, and begins reverting all patches previously submitted from such addresses
To be honest, I can't imagine any other outcome. No one likes being cheated out of work they did, especially when much of it is volunteer work. But I was wrong to say the research was useless: it does demonstrate that identities without provenance can get malicious code into the kernel.
Perhaps what we really need is a Social Credit Score for OSS ;)
> 3. Some of these patches are accepted
> 4. They intervene at this point and reveal that the patches are malicious
> 5. The patches are then not merged
It's not clear to me that they revealed anything, just that they did fix the problems:
> In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel — in all of their test cases, they’d eventually pulled their bad patches and provided real ones. Kroah-Hartman, of the Linux Foundation, contests this — he told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.
(I'm only working from this article, though, so feel free to correct me)
The authors were 100% in the right, and GKH was 100% in the wrong. It's very amusing to go back and read all of the commenters calling for the paper authors to face criminal prosecution. The fact is that they provided a valuable service and exposed a genuine issue with kernel development policies. Their work reflected poorly on kernel maintainers, and so those maintainers threw a hissy fit and brigaded the community against them.
Also, banning umn.edu email addresses didn't even make sense since the hypocrite commits were all from gmail addresses.
[1] https://grants.nih.gov/policy-and-compliance/policy-topics/h...
A retroactive exemption!
(2021) Discussion at the time (3025 points, 1954 comments) https://news.ycombinator.com/item?id=26887670
> Also, banning umn.edu email addresses didn't even make sense since the hypocrite commits were all from gmail addresses.
The blanket ban was kicked off by a second incident, after the hypocrite-commits one.
Imo, the experiment was worthwhile: it exposed a real risk, and hopefully the kernel is better armed against similar attacks now.
Did they ever get un-banned? IIRC, that univ has/had a great Computer Science dept.
But there are always the BSDs.