A New Mindcraft Moment


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]



1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to the relevant issues of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the topic. you don't like the facts? then say so. or better yet, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.
2. "We do a reasonable job of finding and fixing bugs." let's start here. is this assertion based on wishful thinking or cold hard facts you're going to share in your response? based on Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy, use and discard in that period.
3. "Problems, whether they are security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be handled with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets determined to be security related, which as we all know is a messy business in the linux world)
4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.
5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.



Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]



Money (aha) quote:
> I suggest you spend none of your free time on this. Zero. I suggest you get paid to do this. And well.
Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.



Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]



I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at its presentation. The tone of PaXTeam's comment reflects the frustration built up over the years with the way things work, which I think should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,



Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]



why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?



Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]



No Argument



Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]



Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section that the tech press (LWN generally excepted) tends to provide.



Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]



OK, but I was thinking of Linus Torvalds



Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]



Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]



Why should you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)



The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but merely throwing money at the problem won't fix this.



And yes, I do realize the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.



Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]



Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]



I believe you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going toward security... and now it needs to. Aren't you glad?



Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]



they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.
> And if Jon had only talked to you, his would have been too.
given that i am the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm fairly sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).
> [...]it also contained quite a few of groan-worthy statements.
nothing is perfect, but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how much you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic quality, since a previous LWN article saw fit to include a few typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?
> Aren't you glad?
no, or not yet anyway. i've heard lots of empty words over the years and nothing ever materialized or, worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises FWIW).



Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]



Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]



Right now we have developers from big names saying that doing everything the Linux ecosystem does *safely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional goals, and sometimes performance goals. Security goals are often ignored. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that is a task that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will keep working because $evidence, note carefully that you are staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I would prefer that the community shepherds users to follow the pattern of stating problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.



Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



And about that fork barrel: I would argue it's the other way around. Google forked and lost already.



Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]



Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]



So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?



Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]



I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.



Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]



Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]



I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. A time like this, when something you appear to be an "expert" at is in demand, is when you demonstrate cooperation and willingness to participate, because it's an opportunity. I'm rather surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in an average career, a handful at most. Sometimes you have to invest in proving your expertise, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as described in the article as a "mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, those developers who exploit the opportunity will prosper from it. I feel old even having to write that.



Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]



Perhaps there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's perfectly reasonable to prefer working out of tree, preserving the ability to develop impressive and critical security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if it meets their needs.



Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]



You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be far more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts.
> case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.
For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was a bad fit because the code was never going upstream. You told them it was because of kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and finally even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you imply above.
> obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we don't fund multi-year projects at all'. that's something one should be told upfront (or heck, be part of some public guidelines so that others will know the rules too).
You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you need the money and how you will spend it, they're unlikely to disburse. Saying "I'm brilliant and I know the problem, now hand over the money" doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals.
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1
Stellar, I must say. And before you go off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be due to the not inconsiderable efforts of other people in this area. You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly standard first-stage business model, but it does rather depend on patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there. Now here's some free advice in my field, which is helping companies align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to remember that it will be a hard sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it's: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B and you might even have a Plan A selling a rollup of upstream-tracked patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed because your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.



Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]



> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we all would end up with a number of unfinished and partially broken features.
please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal, because this is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple, basic requirements should be public information.
> Stellar, I have to say.
"Lies, damned lies, and statistics". you realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in because of us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches directly (and that one commit you found that went in despite said ban is actually a very bad example because it's also the one that Linus censored for no good reason, which made me decide never to send security fixes upstream until that practice changes).
> You now have a business model selling non-upstream security patches to customers.
now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills.
> [...]calling into question the earnestness of your attempt to put them there.
i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story. as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you'll find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in starvation. PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine. PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).
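
As a rough illustration of the "more than one way" point above, a search of a kernel checkout for credits beyond the Author: field might look like the following sketch; the grep patterns are assumptions for illustration, not an actual audit of anyone's contributions.

    # Rough sketch: count different forms of credit in kernel git history.
    # The patterns below are illustrative assumptions, not an audit.
    cd linux

    # Commits directly authored (what the one-liner above counts):
    git log | grep -ic 'author: pax.*team'

    # Credits recorded in Reported-by:/Suggested-by: tags:
    git log | grep -icE '(reported|suggested)-by:.*pax.*team'

    # Any mention of the project name in commit messages:
    git log --grep='PaX Team' --oneline | wc -l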



Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]



In other words, you tried to define their process for them ... I can't think why that wouldn't work.
> "Lies, damned lies, and statistics".
The problem with ad hominem attacks is that they are singularly ineffective against a transparently factual argument. I posted a one-line command anybody could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like better?
> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).
So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.



Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]



what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not really answered.
> The problem with ad hominem attacks is that they are singularly ineffective against a transparently factual argument.
you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove; as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. nice job there, James.). as for world domination, there are many ways to achieve it, and something tells me that you're clearly out of your league here since PaX has already achieved it. you are running code that implements PaX features as we speak.



Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]



I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):
> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?
I take it, by the way you've shifted ground in the preceding threads, that you wish to withdraw that request?



Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]



Please provide one that is not wrong, or less wrong. It would take much less time than you've already wasted here.



Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]



anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an incredibly complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).



Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]



*shrug* Or don't; you're only sullying your own reputation.



Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]



I wouldn't either



Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]



Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]



Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]



Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to http://lwn.net/Articles/663612/. PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other, I doubt it. (PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's prepared to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)



Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]



> and that one commit you found that went in despite said ban
also, somebody's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy, though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).



Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]



I don't see this message in my mailbox, so presumably it got swallowed.



Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



You are aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you are irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?



Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]



I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that, despite being unpalatable to a lot of the community, the article might in fact contain a fair amount of truth.



Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]



Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]



"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could effectively be true" Just as you criticized the article for mentioning Ashley Madison though in the very first sentence of the next paragraph it mentions it did not contain the Linux kernel, you cannot give credence to conspiracy theories with out incurring the identical criticism (in other words, you can't play the Glenn Beck "I am just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Very like mentioning Ashley Madison for instance for non-technical readers in regards to the prevalence of Linux in the world, if you are criticizing the point out then shouldn't likening a non-FUD article to a FUD article additionally deserve criticism, particularly given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Group pointed out in the initial submit, the motivations aren't hard to know -- you made no mention at all about it being the 5th in a protracted-running collection following a fairly predictable time trajectory. No, we didn't miss the general analogy you had been trying to make, we just do not suppose you'll be able to have your cake and eat it too. -Brad



Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]



Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]



It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-) K3n.



Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]



Unfortunately, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel people when it comes to their attitude. I confess I have absolutely no technical capabilities on any of these topics, but if they had all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...



Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]



Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]



Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it's not either/or. It's probably "it depends". But, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday choices for distributors and users.
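
To make that "it depends" concrete, here is a minimal sketch of what deployment-specific choices can look like once hardening knobs live in the vanilla tree; it assumes a mainline checkout with scripts/config available, and the option names are merely examples of upstream features from that era, not grsecurity functionality.

    # Minimal sketch, assuming a mainline kernel tree; option names are
    # illustrative upstream hardening knobs, not grsecurity features.
    cd linux

    # Throughput-first build for a trusted, air-gapped cluster:
    make defconfig
    ./scripts/config --disable RANDOMIZE_BASE --disable CC_STACKPROTECTOR_STRONG
    make olddefconfig

    # Hardened build for hard-to-patch consumer devices:
    make defconfig
    ./scripts/config --enable RANDOMIZE_BASE --enable CC_STACKPROTECTOR_STRONG
    make olddefconfig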



Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]



How sad. This Dijkstra quote comes to mind immediately: "Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter 'How to program if you cannot.'"



Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]



I suppose that truth was too unpleasant to fit into Dijkstra's world view.



Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]



Indeed. And the interesting thing to me is that when I reach that point, tests are not enough - model checking at a minimum, and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about these things at all. So I started defining the properties I needed and step by step proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find this both completely obvious that this would happen and completely terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.



Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]



> Indeed. And the interesting thing to me is that once I reach that point, tests aren't sufficient - model checking at a minimum and really proofs are the only way forwards.
Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by occupation an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what's an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff. Point is, you have to *layer* stuff, and look at things, and say "how can I split things off into 'black boxes' so that at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a set of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen". Cheers, Wol



Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]



To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we would construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued). The result is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. Thus proving the formal design correct (with caveat lector regarding scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).



Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]



Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, Wol



Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]



https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think that this talk is very relevant to why writing secure software is so hard.. -Dave.



Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]



While we are spending millions on a multitude of security issues, kernel issues are not on our top-priority list. Honestly, I remember only once discussing a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability. But "patch management" is a real challenge for us. Software must continue to work if we install security patches or upgrade to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. Nowadays almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Commonly those systems do not survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates. Overall I'm optimistic: networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.



Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]



The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to be released and embarrass people, it _appears_ as though those hacks are via much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I think the latter is far more effective at keeping systems "secure" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?



Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]



On the other hand, some effective mitigation at the kernel level would be very useful for crushing cybercriminals' and skiddies' attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it'd be okay? Btw, offset2lib is useless against PaX/grsecurity's ASLR implementation. For most commercial uses, more security mitigation inside the software won't cost you extra money. You'll still need to do the regression testing for every upgrade.
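
For readers who want to see the baseline mitigation being discussed here, a quick (and admittedly crude) way to confirm that stock userspace ASLR is active on a Linux system looks like the sketch below; it says nothing about PaX's stronger implementation, nor about the inter-library offset correlation that offset2lib abuses.

    # Crude sketch: confirm stock userspace ASLR is enabled and observable.
    sysctl kernel.randomize_va_space    # 2 means full randomization on most distros

    # Each grep below is a fresh process; with ASLR on, its libc base differs per run:
    grep -m1 libc /proc/self/maps
    grep -m1 libc /proc/self/maps
    grep -m1 libc /proc/self/maps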



Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Keep in mind that I focus on external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.



Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]



I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft. Oh well. I mean, security is nice too, I guess.



Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]



Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]



Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]



(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)



Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]



I would just like to add that in my opinion, there is a fundamental problem with the economics of computer security, which is especially visible at present. Two problems, even, possibly. First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are primarily selected just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save the money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are a lot of bad or incomplete approaches currently available in the computer security field. Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve. And we definitely need to enlighten the press on that, because it is not easy to assess the effectiveness of protection mechanisms (which, by definition, should prevent things from happening). Second, and this may be newer and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness. Nevertheless, all the resources go to those adult teenagers playing the white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness at all (especially for peace protection...). Personally, I would happily leave them all the hype; but I will forcefully claim that they have no right at all to any of the budget allocation decisions. Only those working on protection should. And yep, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (and I guess the PaX team would be among the first to benefit from such a change). While thinking about it, I wouldn't even leave the white-hat or cyber-guys any hype in the end. That's more publicity than they deserve. I crave for the day I will read in the newspaper that: "Another of those ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake, and managed nevertheless to bring one of those unfinished and poor-quality programs, X, that we are all obliged to use, to its knees, annoying millions of normal users with his unfortunate cyber-vandalism. All the security specialists unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to fund more security engineer positions in the academic field or civilian industry.
And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."



Hmmm - cyber-hooligans - I like the label. Though it does not apply well to the battlefield-oriented variant. Memes Rain



Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]



The state of the 'software security industry' is an f-ing disaster. Failure of the highest order. There are enormous amounts of money that go into 'cyber security', but it is usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes. Some level of regulation and standardization is absolutely needed, but lay people are clueless and are completely unable to discern the difference between somebody who has useful experience and some company that has spent millions on slick marketing and 'native advertising' on big websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.
> Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve.
There is no such thing as 'our scarce money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (you, actually, have far more control over how Walmart spends its money than over what your government does with theirs)
> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them into uselessness.
Having secure software with robust encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone initiatives or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts. Unfortunately you/I/we can't rely on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable... but they are driven by the need to turn a profit, which means they need to cater directly to the kinds of requirements established by their customer base. Customers for EL tend to be far more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats... assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than a thousand hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is much more devastating and poses a massively greater risk than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to get access to the important data... generally all of which is contained in a single user account. Ultimately it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.



Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]



Spilling has always been the case, but now, to me and in computer security, most of the money appears to be spilled due to bad faith. And this is mostly your money or mine: either tax-funded governmental resources or company costs that are directly re-imputed on the prices of products/software we are told we are *obliged* to buy. (Look at corporate firewalls, home alarms or antivirus software marketing discourse.) I think it's time to point out that there are a number of "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). In the end, I think you are right to say that currently it's only up to us individuals to try earnestly to do something to improve Linux or computer security. However, I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, more or less randomly, some difficult-to-evaluate budgets. [1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their mind.



Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]



It even has a nice, seven-line BASIC pseudocode that describes the current situation and clearly shows that we are caught in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005 and all the things that were clearly stupid ideas 10 years ago have proliferated even more.



Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]



Note that IMHO, we should investigate further why these dumb things proliferate and get so much support. If it is only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do wonderful things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to show off at a minimum (and more, later on, of course). Your reference's conclusion is especially nice to me. "Challenge [...] the conventional wisdom and the status quo": that job I would happily accept.



Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]



That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that adds little of value. Personally, I think there is no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it is that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to common distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are lots of people working on "block classes of attacks" stuff, the question is, why aren't there more resources directed there?



Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]



> There are many reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
This seems like a reason which is really worth exploring. Why is it so? I think it's not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It has been this way for decades. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it actually already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit. But it looks like the linux development process is not overly reactive elsewhere.



Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]



That's an interesting question, certainly that's what they really believe regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there isn't enough consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.



Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]



The key issue with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to an absence of voluntary strategy persists, we will oscillate between phases of relaxed inconscience and anxious paranoia. Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the days when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. They are not so distant, the days when innocent lives will unconsciously depend on the security of (linux-based) computer systems; under water, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.



Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]



Traditional hosting companies that use Linux as an exposed front-end system are retreating from development while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is really not that surprising: for hosting needs the kernel has been "done" for quite a while now. Apart from support for current hardware there isn't much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use grsecurity. I don't have any numbers, but some experience suggests that grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running big enterprise databases that are wrapped in layers of middleware. And mobile vendors simply don't care.



Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]



Linking



Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]



Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]



The assembled likely recall that in August 2011, kernel.org was root compromised. I am sure the system's hard drives were sent off for forensic examination, and we've all been waiting patiently for the answer to a most important question: What was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this notice at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.)

That comment was removed (together with the rest of the site News) in a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public autopsies of the 2010 Web site breaches.

Ars Technica's Dan Goodin was still trying to follow up on the lack of an autopsy on the kernel.org meltdown -- in 2013. Two years ago. He wrote: 'Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I'm not responsible for it," he wrote.'

Who's responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some information? Rick Moen [email protected]



Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]



Less seriously, note that if even the Linux mafia does not know, it must be the Venusians; they are notoriously stealthy in their invasions.



Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]



I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.



Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]



I beg your pardon if I was somehow unclear: that was stated to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net a few years prior, around 2002, and into many other shared Web hosts for many years). But that isn't what is of primary interest, and isn't what the long-promised forensic study would primarily concern: How did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated.'

OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: Whose key was stolen? Who stole the key?) That is the kind of autopsy that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen [email protected]



Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



I've done a closer review of the revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': Root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials.

Other tidbits:
- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and the Linux Foundation sat on the information and failed to inform the public for those same multiple days.
- Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.

I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Essentially, they are surmises. If the people with the facts were more forthcoming, we would know what happened for certain.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen [email protected]
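For concreteness, here is a minimal sketch (my own illustration, not anything from the never-published kernel.org report) of what 'wide-open /dev/mem' means on a 2.6-era kernel built without the later STRICT_DEVMEM restriction: a root process can read -- and, if it opens the device read-write, patch -- arbitrary physical memory, including the running kernel image, which is exactly the kind of access a memory-patching rootkit such as Phalanx needs. The 1 MiB offset below is just an arbitrary example address.

/*
 * Illustration only: read a few bytes of physical memory via /dev/mem.
 * On old kernels without STRICT_DEVMEM this works for any physical
 * address; on modern kernels the read below is expected to fail.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[64];
    off_t offset = 0x100000;            /* 1 MiB: arbitrary example address */
    int fd = open("/dev/mem", O_RDONLY);

    if (fd < 0) {
        perror("open /dev/mem");        /* needs root */
        return 1;
    }
    if (lseek(fd, offset, SEEK_SET) == (off_t)-1 ||
        read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
        perror("read");                 /* STRICT_DEVMEM blocks this range */
        close(fd);
        return 1;
    }
    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x%c", buf[i], (i % 16 == 15) ? '\n' : ' ');
    close(fd);
    return 0;
}

The later STRICT_DEVMEM option (and grsecurity's earlier equivalent) exists precisely to close off this kind of access to kernel memory from userspace.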



Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]



Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on. -Brad



How about the long-overdue autopsy on the August 2011 kernel.org compromise?



Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



Thank you for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

> Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

Arguable, but it's a tradeoff: you can poke the compromised live system for state information, but with the drawback of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull the power to end the intrusion. Rick Moen [email protected]



Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]



Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]



With "one thing" you imply those who produce those closed supply drivers, right? If the "shopper product corporations" just stuck to utilizing parts with mainlined open source drivers, then updating their products could be much easier.



A new Mindcraft moment?



Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]



They have ring 0 privilege, can access protected memory directly, and cannot be audited. Trick a kernel into loading a compromised module and it's game over. Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules tend to be video drivers optimised for games ...
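To make that concrete, here is a minimal toy module (purely illustrative, nothing to do with any real driver): its init function already runs in ring 0 with direct access to kernel data structures, so a hostile or buggy module needs no further "exploit" step at all.

/*
 * Toy module for illustration only: everything here runs in ring 0.
 * A benign module merely logs kernel-internal state; a malicious one
 * could just as easily patch syscall handlers or credentials.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>

static int __init ring0_demo_init(void)
{
    /* Direct access to kernel data: the task that loaded the module. */
    pr_info("ring0_demo: loaded by pid %d (%s), running in kernel mode\n",
            current->pid, current->comm);
    return 0;
}

static void __exit ring0_demo_exit(void)
{
    pr_info("ring0_demo: unloaded\n");
}

module_init(ring0_demo_init);
module_exit(ring0_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Illustration: module code runs with full kernel privilege");

Build it with the usual obj-m kbuild recipe and load it with insmod; the point is simply that once the kernel accepts a module, there is no privilege boundary left between it and the rest of the kernel.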