Power Security

Lessons From the Cyberattack On India's Largest Nuclear Power Plant (thebulletin.org) 113

Dan Drollette shares a Bulletin of the Atomic Scientists article by two staffers at the Center for Global Security Research at Lawrence Livermore National Laboratory.

"Indian officials acknowledged on October 30th that a cyberattack occurred at the country's Kudankulam nuclear power plant," they write, adding that "According to last Monday's Washington Post, Kudankulam is India's biggest nuclear power plant, 'equipped with two Russian-designed and supplied VVER pressurized water reactors with a capacity of 1,000 megawatts each.'"

So what did we learn? While reactor operations at Kudankulam were reportedly unaffected, this incident should serve as yet another wake-up call that the nuclear power industry needs to take cybersecurity more seriously. There are worrying indications that it currently does not: A 2015 report by the British think tank Chatham House found pervasive shortcomings in the nuclear power industry's approach to cybersecurity, from regulation to training to user behavior. In general, nuclear power plant operators have failed to broaden their cultures of safety and security to include an awareness of cyberthreats. (And by cultures of safety and security, those in the field -- such as the Fissile Materials Working Group -- refer to a broad, all-embracing approach towards nuclear security, that takes into account the human factor and encompasses programs on personnel reliability and training, illicit trafficking interception, customs and border security, export control, and IT security, to name just a few items. The Hague Communique of 2014 listed nuclear security culture as the first of its three pillars of nuclear security, the other two being physical protection and materials accounting.)

This laxness might be understandable if last week's incident were the first of its kind. Instead, there have been over 20 known cyber incidents at nuclear facilities since 1990. This number includes relatively minor items such as accidents from software bugs and inadequately tested updates along with deliberate intrusions, but it demonstrates that the nuclear sector is not somehow immune to cyber-related threats. Furthermore, as the digitalization of nuclear reactor instrumentation and control systems increases, so does the potential for malicious and accidental cyber incidents alike to cause harm.

This record should also disprove the old myth, unfortunately repeated in Kudankulam officials' remarks, that so-called air-gapping effectively secures operational networks at plants. Air-gapping refers to separating the plant's internet-connected business networks from the operational networks that control plant processes; doing so is intended to prevent malware on the more easily infected business networks from reaching industrial control systems. The intrusion at Kudankulam so far seems limited to the plant's business networks, but air gaps have failed at the Davis-Besse nuclear power plant in Ohio in 2003 and even in classified U.S. military systems in 2008. The same report from Chatham House found ample sector-wide evidence of employee behavior that would circumvent air gaps, like charging personal phones via reactor control room USB slots and installing remote access tools for contractors... [R]evealing the culprits and motives associated with the Kudankulam attack matters less for the nuclear power industry than fixing the systemic lapses that enabled it in the first place.

"The good news is that solutions abound..." the article concludes, noting guidance, cybersecurity courses, technical exchanges, and information through various security-minded public-private partnerships. "The challenge now is integrating this knowledge into the workforce and maintaining it over time...

"But last week's example of a well-established nuclear power program responding to a breach with denial, obfuscation, and shopworn talk of so-called 'air-gaps' demonstrates how dangerously little progress the industry has made to date."
Lessons From the Cyberattack On India's Largest Nuclear Power Plant

  • Wait, What? (Score:5, Insightful)

    by Waffle Iron ( 339739 ) on Saturday November 16, 2019 @10:49PM (#59421746)

    employee behavior that would circumvent air gaps, like charging personal phones via reactor control room USB slots

    Reactor control rooms have USB slots? WTF?

    • They need to export data, and of course, install upgrades.

      You might argue that the need to install upgrades is a serious problem, and you would be right, but the industry doesn't care.
      • They need to export data, and of course, install upgrades.

        You might argue that the need to install upgrades is a serious problem, and you would be right, but the industry doesn't care.

        A proprietary socket interface would solve this problem.

        What else can we solve here today?

      • What does any of that have to do with these so-called "USB Slots", whatever they are? And how do "USB Slots" in the control room have anything to do with "export data" or "install upgrades"? Clearly you have never heard of this 1970s technology called a network ...

        • by mlyle ( 148697 )

          The machines in question are air-gapped. Only carefully vetted data is supposed to be carried back and forth, with no permanent connection. But if the way this happens is supposed to be through trusted USB devices plugged into USB ports, it's bad if untrusted devices with network connections are plugged into those USB ports.

          • And why would this be occurring in the control room? And who would be doing it do you suppose? The Operators? Hahahehehehhohoho.

            Operators operate the plant. They are neither charged with nor responsible for "exporting data" or "installing upgrades." Someone has no concept of what a Process Operator (as in the Console operator) actually does or is paid to do.

          • I think you misunderstand the term "air-gapped". The consoles cannot be "air-gapped" -- they need to communicate with the control systems and instrumentation. The entire process control environment may be "air-gapped" from the "entertainment" network (alternatively called the "business" network) where nothing much of consequence occurs, but the console stations cannot be isolated from the control system -- otherwise there would be no point in having them at all, would there?

          • equipped with two Russian-designed and supplied VVER pressurized water reactors

            Well at least we know the attackers weren't Russians this time.

      • > They need to export data, and of course, install upgrades.

        eSATA might be a better option for those who want a smaller attack surface.

    • The cheap computers that run them do.

      • Oh, so a "USB Slot" means a "USB Port", and in particular an accessible USB Port on a console computer. This seems rather far-fetched to me. However I suppose it is possible that there are idiots who do that sort of thing.

        • Oh, so a "USB Slot" means a "USB Port", and in particular an accessible USB Port on a console computer. This seems rather far-fetched to me. However I suppose it is possible that there are idiots who do that sort of thing.

          You mean people who get hung up on obvious things that have nothing to do with the conversation? Or get enraged when they see a thesaurus?

          Most of us instantly figure out that USB slots are USB Ports. Or USB connectors, or USB plugs, or USB sockets. I'll use USB* so as not to trigger ya.

          USB* has long been known as an attack vector. I've seen it in action. It is pretty easy to gain access to a USB* input. One example is: Go to a trade show and get a freebie USB Flash Drive, AKA Thumb drive, AKA Geek Stick

        • The USB slot is the slot in which the USB device is plugged into the USB port. The USB port is the entire USB hardware including the silicon. A USB port can be USB 1.0, 1.1, 2.0, 3.0 etcetera. For a given port type they have different slots of type A, B, C, etc.

          "However I suppose it is possible that there are idiots who do that sort of thing.

          Oh, the irony of you calling anyone an idiot.

          • And why would you say that? I guess you believe that the USB "slots" on HMI computers should be available and functional for random passers-by to plug things into? Perhaps you should be joining the league of idiots, as only idiots hold such beliefs.

            • I guess you believe that the USB "slots"

              There are no quotes around slots you fucking moron. You plug USB devices into the slot of the USB port. Learn to shut the fuck up when you've made yourself look like an idiot if you aren't man enough to own up to your stupidity.

        • This is the hill you want to die on? USB Slot vs USB Port?

      • There's nothing cheap about the computers provided as operator stations. They are full of completely pointless hardware. Oh and they don't run reactors. They only provide input to the very expensive "not at all anything like a computer" control system that runs reactors.

    • by gweihir ( 88907 )

      Reactor control rooms have USB slots? WTF?

      The article did talk about this being a flaw in the mind-set. But yes, they do. How would you upgrade the firmware on such a thing otherwise, or remove log data? It is air-gapped, remember? However, these USB slots should be behind two-lock covers that can only be opened with special procedures and absolute prohibitions on connecting anything but the intended devices.

      Side note: I once came to know that some electrician had connected his phone to a USB port on a secure server to charge it in a server room where

      • The Console Stations are not air-gapped from the Control Network. The Control Network is air-gapped from the entertainment (business) network. There is no need for USB Access to a console station when it is being used as an Operations station.

        • by gweihir ( 88907 )

          Talk about stating the obvious and missing the point. If there are USB ports, then they are either needed or somebody screwed up massively by not closing them up, nicely illustrating the point of the article.

          • Why are you assuming there are USB ports? Where are you imagining they are?

            Nuclear plant cyber security (I am speaking of the plant systems, NOT the admin business networks) is way ahead of most other industries. They have thought of everything discussed on this thread and ten times more. Unfortunately, we'll always have these Dunning-Kruger level posts based on misinformed articles.
            • by gweihir ( 88907 )

              From the story: " like charging personal phones via reactor control room USB slots"....

              • So? Go to the corner store and buy a USB power adapter and have the System/Instrumentation Engineer plug it into an outlet for you. You now have a USB port in the Control Room that can be used for "like charging personal phones via reactor control room USB slots" ...

                The article (which was a reference to a study by some bozos and not related to the incident that is the subject of the article) merely specified that there was a "USB slot" located in the control room and that someone used it to charge up thei

                • by gweihir ( 88907 )

                  You have no idea what you are talking about. In an IT security analysis, a USB port of concern is most certainly one connected to a system under evaluation. It is a standard evaluation item treated in a standard way. Obviously you have never seen such an evaluation report.

      • by AmiMoJo ( 196126 )

        Updating the firmware on your nuclear plant should be a major, carefully controlled and monitored operation. Replacing a HDD seems like a better way to do it, or even replacing the whole control PC. After all you want to stage it and test it out first before putting it in a production system.

        For logs I'd suggest having data spat out over a unidirectional RS232 link onto a logging machine. Even if it gets p0wned it can't do anything.
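        A minimal sketch of that receive-only logging idea, assuming the pyserial package on the logging machine and a cable with no transmit line wired back toward the control side; the port name, baud rate, and file path below are placeholders, not details from the thread:

        # Receive-only serial logger (sketch). The logging host only reads; with no
        # TX line physically connected, a compromised logger cannot talk back to the
        # control system.
        import serial  # pyserial

        def run_logger(port="/dev/ttyS0", baud=9600, logfile="plant.log"):
            with serial.Serial(port, baud, timeout=1) as ser, open(logfile, "ab") as out:
                while True:
                    line = ser.readline()  # returns b"" on timeout
                    if line:
                        out.write(line)
                        out.flush()

        if __name__ == "__main__":
            run_logger()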

        • Updating the firmware on your nuclear plant should be a major, carefully controlled and monitored operation.

          General statements like this demonstrate ignorance of nuclear control systems. There are many control systems at a nuclear plant. There are highly secured, segmented, and isolated safety control systems (not just one, but multiple ones), there are more elaborate turbine control systems that are integrated with other production systems, and there are various monitoring systems that are much less critical and provide data throughout the plant.

          Critical safety systems are actually based on quite simple logic

          • by AmiMoJo ( 196126 )

            I do this kind of thing (not nuclear, but life critical safety systems) for a living. I write the firmware and sometimes manage deployment.

            You may think that the system is segmented and isolated and you can safely update one part from a USB flash drive at little risk to anything, but you are wrong. All those systems interact in ways that can be difficult to predict. Even if something seems trivial it may end up being critical, e.g. a particular display is referenced in the emergency operations and if it

            • I do this kind of thing (not nuclear, but life critical safety systems) for a living. I write the firmware and sometimes manage deployment.

              You may think that the system is segmented and isolated and you can safely update one part from a USB flash drive at little risk to anything, but you are wrong. All those systems interact in ways that can be difficult to predict. Even if something seems trivial it may end up being critical, e.g. a particular display is referenced in the emergency operations and if it happens to be not working when the operator gets to that step you have a major problem.

              Don't tell me it's all automated. If you are 100% reliant on automated systems with no manual backup you are screwed. That was part of the problem at Fukushima and Chernobyl - loss of instrumentation and automated safety systems.

              Even if safety logic shouldn't change, that doesn't mean there won't be a need to update the firmware. Over and over again we have seen that systems thought to be safe were not and needed to be altered as experience revealed flaws. Simple locks and access controls are also inadequate; you should be using signed binaries at the very least.

              I really hope you are not in charge of nuclear safety.

              Once again, you demonstrate your ignorance of nuclear systems. Your assumption that they are similar to what you are doing is based on ignorance.

              Display errors are not 'major problems' because there are redundant methods to validate operational parameters. Demonstrating your ignorance again.

              Fukushima's problem had nothing to do with cybersecurity and everything to do with a tsunami deluge hitting a plant that was not designed to withstand it. More demonstrable ignorance on your part.

              Nuclear systems are

              • by AmiMoJo ( 196126 )

                At least read my posts properly before responding.

                Fukushima's meltdowns could have been avoided if it had not been for the general confusion in the immediate aftermath brought on by the failure of monitoring systems and the lack of manual backups.

                • Your statement makes no sense and has no basis. A clear attempt to deflect from the fact you know nothing about nuclear control systems, which is quite clear from your posts. Why keep going with your Dunning-Kruger level of insight?
            • "You may think that the system is segmented and and isolated and you can safely update one part from a USB flash drive at little risk to anything, but you are wrong. All those systems interact in ways that can be difficult to predict. Even if something seems trivial it may end up being critical, e.g. a particular display is referenced in the emergency operations and if it happens to be not working when the operator gets to that step you have a major problem."

              If the data is displayed on an operator display (

              • by AmiMoJo ( 196126 )

                An example of a safety system with a display is a dosimeter. Famously one of the issues at both Chernobyl and Fukushima was lack of suitable dosimeters for the staff, making it harder for them to assess the situation and prevent it from getting worse.

          • Critical safety systems are actually based on quite simple logic. They should almost never need to be updated as the safety logic doesn't change. There could be system upgrades, but they are planned. The systems are in limited access rooms, inside locked cabinets. Work orders are carefully planned. Multiple checks: one person cannot perform any such task independently, and every step is double checked. And if any potential cyber asset is involved, a cyber security expert reviews the work and puts the necessary controls in place.

            In a Russian-designed power plant installed in India?

            Russian attitudes towards such elaborate procedures are legendary, and Indian attitudes are, if anything, a step down from there.

            Tell us again about your extensive experience in the reactor business in the Western world that is laughably divorced from what happens in the Near East. Oh, I see you did. Carry on.

    • employee behavior that would circumvent air gaps, like charging personal phones via reactor control room USB slots

      Reactor control rooms have USB slots? WTF?

      There are business networks in plants, completely separate from the control systems. A work laptop might have a usb port. But it doesn't connect to the plant. You certainly can't plug a usb stick into a safety control system. The article is written by an ignorant author.

      • You certainly can't plug a usb stick into a safety control system.

        I guess you've never looked at the shape of the port on the bottom of the main processor modules of Schneider's Nuclear 1E certified safety systems. Or looked at the very normal computers that are connected to the TSAA network that controls it. Or for that matter any part of any modern control system.

        USB is everywhere. On the operator consoles. On the engineering workstations. On the support systems. And even on the control systems and safety systems directly.

        The article is written by an ignorant author.

        Nowhere near as bad as your post.

    • Reactor control rooms have USB slots? WTF?

      You may be surprised that nuclear reactors are controlled by these things called "computers". A "computer" is a device not unlike the thing you are using to read this right now, they have mice (USB), keyboards (USB), and require things like software patches (often delivered via USB).

      • No, the control systems are proprietary and designed specifically for purpose. The Supervisory systems often run on commodity computers running commodity Operating Systems, but these devices do not actually "control" anything.

  • Did they make sure to install the latest updates? This is a critical part of the security posture at all nuclear power plants.
    • The best practice is to never install updates.

      Thus as something works today, so it will work tomorrow. For every instance of today and tomorrow until the heat death of the Multiverse.

      The "entertainment" systems (what you would call the Business Network, the one run by the IT 1 D 10 T folks) because it does not really matter whether that stuff works or not since it has zero impact on the real world. They can afford multi-week loss of view and loss of control since they have nothing of import to view and no

    • Did they make sure to install the latest updates? This is a critical part of the security posture at all nuclear power plants.

      Malware was found on the business network; it has nothing to do with the plant control network. This entire article is based on ignorance of what actually occurred.

      • Indeed! And the Business Network is an "Inherently Malicious External Network" operated by Clowns from the perspective of the Control Network operators.

  • by phantomfive ( 622387 ) on Saturday November 16, 2019 @11:00PM (#59421766) Journal
    Air gap works if you implement it correctly. If you implement it poorly, then it's still better than any other security measure implemented poorly. To begin with, using USB to transfer files is a mistake. There are so many other options that work better, and one of them should be used. In fact, no air-gap exploit would have succeeded to date, if it weren't for USB.

    Secondly, these kinds of scare stories are driving some kind of agenda. I don't know what that agenda is, but the nuclear power plant wasn't breached, according to the article.
    • The agenda is likely to scare people straight. I was teaching a seminar a couple years ago, and one of the attendees was sharing how proud he was and the commendations he received for his “innovative” use of a raspberry pi to avoid a costly PLC replacement in a critical environment.

      Explaining a dozen or so issues with the approach took the next few hours. People don’t inherently “get” security— it really needs to be taught.

      • use of a raspberry pi to avoid a costly PLC replacement in a critical environment.

        What's wrong with that? Because of the Wifi, bluetooth, and USB?

        • All of those can be turned off, just like on any other computer.
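            A rough sketch of what "turned off" could look like in software on a Linux-based board, assuming the standard rfkill sysfs interface and root access; disabling the radios in firmware or the boot configuration is the more thorough route, so treat this as illustrative only:

            # Soft-block Wi-Fi and Bluetooth radios via the Linux rfkill sysfs interface.
            from pathlib import Path

            def block_radios():
                for dev in Path("/sys/class/rfkill").glob("rfkill*"):
                    radio_type = (dev / "type").read_text().strip()
                    if radio_type in ("wlan", "bluetooth"):
                        (dev / "soft").write_text("1")  # 1 = soft-blocked (radio off)

            if __name__ == "__main__":
                block_radios()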

        • Because the primary goal of the pi was to be built as cheap as possible. It was meant as a tool for students and before the pi you couldn't get a single board computer for under $100. I don't know about you but I wouldn't be replacing mission critical controllers with something that changes parts between runs to keep costs down.

          • The primary goal of most equipment (including what is referred to as a PLC) is to build it as cheaply as possible and sell it for the highest price the market is willing to pay. All "Mission Critical Controllers" (however you want to define those) change parts between runs to keep costs down.

            • Indeed it is. But cheaply as possible in the control systems / PLC world involves still complying with a world of testing and certification requirements including QA on coding, and design. "Mission Critical Controllers" don't change parts on a whim to keep costs down. They change parts every few years on a cost review after a shitton of testing and design verification. Their parts are high cost to begin with precisely because of the reliability requirements placed on them by customers. The lack of this quic

              • Using a Pi or any other "general purpose" solution for something that needs to run for years without intervention is an obvious problem with the risk management and hazop processes in that it permitted such a device to be used in such service. Unless of course that *was* taken into account and deemed irrelevant to the particular use to which the device was being put.

                You are treating things which are not equal as if they were.

          • by AmiMoJo ( 196126 )

            The Raspberry Pi Compute Module is suitable for that kind of use. They keep the BOM consistent and they are used in various industrial applications. They are nice modules, fairly low cost and much better supported than most SoM offerings. Plus you can use the full size Pi as a development platform which is very handy.

          • Yes, pi would be a mistake from a reliability standpoint. I run multiple beaglebones and pi's for various things around the house. Over the past 5-ish years, I've had 3 pi's fail and so far no beaglebones. Ironically, the beaglebones are outside and the pi's are all inside. Even with that, not sure I'd stick a beaglebone in a life/death use. For sure not a pi. Everything breaks, although my apt amplifier of 35 years finally needed a new cap and for good measure I recap'ed my 40 year old hafler amp as well si
            • My experience with the Pi as well, they are all nifty and you can use them to do all sorts of things, but they are not exactly reliable.
              We used them to replace one expensive PC with three cheap Pi's at one company I worked, we used a LOT of them, bought them in bulk, at one point our entire country's stock in fact. An SD would fail at least once a week, a Pi once a month across ALL the branches in the company. The software was changed to cater for it, each branch had spares to replace where needed, suspe
        • Re:Air gap works (Score:5, Insightful)

          by gweihir ( 88907 ) on Sunday November 17, 2019 @12:34AM (#59421980)

          These things are _not_ reliable. And they have complex software on them that can behave in unexpected ways. PLCs come with extensively tested reliability stats and assurances far beyond "it does not break". A Raspberry Pi hobbyist device comes with "it will work for a few years if you are lucky and it may randomly make errors". A Raspberry Pi does not even have ECC memory or a reliable MCU on it, and its function is certainly not fully tested. It is "cheapest possible".

          • So? How do you know that these risks were not assessed as part of the commissioning process? Mitigation of the "it will only last a couple of years" is simple. Buy ten of them and pre-load them with the appropriate software and put them on the shelf. It is still cheaper than buying a PLC especially for non-control use. Even I would have objections to using a Pi based device "on-control", but "critical environment" is in the eye of the risk assessor and not an external observer who is likely not complet

            • Re:Air gap works (Score:4, Insightful)

              by gweihir ( 88907 ) on Sunday November 17, 2019 @01:24AM (#59422060)

              If you fully test a Raspberry Pi, you end up at a price-tag higher than a PLC. You have to create the whole testing process, the equipment, etc. Basically you need to design a PLC based on it. Sure, if there is no secondary damage when it starts to behave in an arbitrary way (which a PLC will not do), you can do it. It is still probably more expensive overall though. And "buy 10"? Have you overlooked that you also need to archive the whole process, all software and system images, and that there are components on an RPi that do have limited shelf-life?

              I do get that PLCs have inflated prices. But replacing it with a hobbyist component is exactly the mind-set that later on causes catastrophes.

              • You are wrong. Improper and incomplete risk assessment and hazop procedures "later on causes catastrophes". Deploying something using a Pi where proper risk assessments and hazops have been performed does not "later on causes catastrophes".

                • by gweihir ( 88907 )

                  And the other points I have made, you just gloss over? You are a hack.

                  The point is that if you follow proper procedures, there is no place where an RPi will give you an advantage over a PLC, except in functions that basically do not matter and a PLC should probably not have been used in the first place.

                • The fact you use the word HAZOP shows you don't actually know what you're doing. HAZOPs are for process. CHAZOPs and FMEAs are for control systems. The process of conducting a detailed CHAZOP and FMEA is more expensive than a small PLC, not to mention that these two processes will straight away find the Raspberry Pi not suitable for anything mission critical at all.

                  You're saving pennies in the most dangerous of places.

                  • by gweihir ( 88907 )

                    Thanks. It seems it becomes pretty clear why ICS security is such a mess: Incompetence of actors in that space.

            • Mitigation of the "it will only last a couple of years" is simple.

              If you propose going through the process of installing a mission critical system which will only last a few years as part of the design you will be laughed off whatever project you are on. In any case it's clear you don't actually work in this field. If you did you'd realise control systems don't cost much at all. Not compared to the engineering hours put into design and verification by the purchaser.

              • by gweihir ( 88907 )

                Indeed. The main cost in such things is engineering hours. That is if they are done right. Even a simple risk assessment for an RPi will already be more expensive than a PLC where you can just look at the datasheet to find what assurances you actually have. Also, that PLC will have long-term availability and after it goes out of availability, there will usually be a drop-in replacement. That is worth a lot.

              • There you go, assuming "mission critical".

      • And what pray tell are those issues?

        A Raspberry Pi can certainly be on the same reliability scale as a dedicated PLC, and can certainly be packaged to meet whatever environmental requirements are required. It is more versatile and programmable than most PLCs and can be more trivially and completely made safe and secure.

        • by gweihir ( 88907 )

          No. Seriously not. Even at the very low end of reliability, a PLC is in a whole different league.

        • gweihir has gone through a number of the concerns I had, but the focus at the time was primarily that they connected it to the secure network without validating or auditing software, without disabling wireless, without a patch management plan, without documentation, with insufficient functional testing, and without disabling included software.

          While we have recommended and used low cost single board computers in a pinch, using it as a direct PLC replacement for essentially ladder logic PIDs is a mistake. (Th

        • A Raspberry Pi can certainly be on the same reliability scale as a dedicated PLC

          You're an idiot who has never looked at a PLC, and I really mean looked: simply from the outside, without even checking part numbers or the design of the equipment, looking from a distance alone will show you why one will fail in a couple of years and the other is designed to last 15+.

          Hint for the ignorant: Conformal coating.

          After you learn what that is, maybe you should start looking at the hardware from closer than 1m away and you'll learn a whole lot of new reasons why your comment is incredibly st

      • by gweihir ( 88907 )

        An utterly unreliable Raspberry Crap as replacement for a critical component? The mind boggles.

        • There was no claim that the Pi was replacing a PLC in a "critical-component", merely that it was in a "critical-environment". These are two entirely different things. For example, an ambient temperature sensor on the scaffold around a tower may be a "critical-environment", however it is not a "critical-component" nor a "critical function". Using a Pi to relay leakage information for a leak detection system as an L1 Alarm is a "critical-component" of a "critical-safety-system" in a "critical environment".

    • The agenda is to sell snake-oil. Lots of snake-oil.

    • by gweihir ( 88907 )

      Air gap works if you implement it correctly.

      And that is the kicker: You need people with a clue. These tend to be more expensive and have less tolerance for abusive working conditions. Hence the industry standard is to use the cheapest possible, or cheaper. Why do you think there is so incredibly much bad software out there?

      Also, there is nothing wrong with using USB for data transfer. Using, say, serial connections, would not make things any better. Even punch cards or punch tape would be subject to the same attacks. The attacks so far have basically al

    • Air gap works if you implement it correctly. If you implement it poorly, then it's still better than any other security measure implemented poorly. To begin with, using USB to transfer files is a mistake. There are so many other options that work better, and one of them should be used. In fact, no air-gap exploit would have succeeded to date, if it weren't for USB.

      Sealing the USB slots with epoxy and disabling DVD drives helps maintain an air gap. Of course, nothing is foolproof, as fools can be very ingenious.
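      A software complement to the epoxy approach, sketched under the assumption of a Linux host with the usual sysfs layout and root access (device names are illustrative): deauthorize newly plugged USB devices by default so only explicitly vetted ones are enumerated.

      # Refuse new USB devices by default via sysfs (Linux, run as root).
      from pathlib import Path

      SYS_USB = Path("/sys/bus/usb/devices")

      def block_new_usb_devices():
          # authorized_default=0 on each root hub leaves newly attached devices
          # de-authorized until an administrator allows them explicitly.
          for hub in SYS_USB.glob("usb*"):
              flag = hub / "authorized_default"
              if flag.exists():
                  flag.write_text("0")

      def allow_device(dev_name):
          # e.g. dev_name = "1-1.2" for one specific, vetted device
          (SYS_USB / dev_name / "authorized").write_text("1")

      if __name__ == "__main__":
          block_new_usb_devices()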

    • Air gap works if you implement it correctly.

      I would argue air gaps makes people complacent. In general I see more companies take security seriously when they *don't* have air-gapped networks. Mind you give someone rope and they will use it to hang themselves. I've also seen companies fuck up security completely.

      Charging phones? Child's play. I know someone who plugged a 3G modem into their operator station on nightshift back in the day (not a nuclear reactor, but a major hazard facility nonetheless) and used an engineer's password to fire up a browse

  • So what did we learn?

    While reactor operations at Kudankulam were reportedly unaffected, this incident should serve as yet another wake-up call...

    TFS does not tell us what we learned, other than that the security was fine. Air gapping worked, and only the less secure business network was impacted.

    The entire rest of the summary is just FUD.

  • Georgia Tech is one of the top three schools for cybersecurity*. They've recently started a master's degree program in cybersecurity for power plants and the electric grid. Pretty soon they'll be graduating 100-200 people with master's degrees in plant security every semester. It will be interesting to see what happens when all of those people go out to get jobs in the sector.

    * Aka information security. The government calls it cybersecurity, sorry if you don't like the term.

    • It will be an unmitigated disaster of checklist idiots playing checklist while having no understanding of the underlying concepts. They will "believe" whatever they are told and thus will do stupid things that will lead to disaster. In all likelihood only 2 of the 100-200 people with Masters Degrees will understand that everything is a lie, especially when it comes from the lips of someone who wants you to buy/use their product.

      • Clearly you've never completed graduate level courses at a top university. There's nothing checklist idiot about this work.

        The people making checklists will hopefully look at the work we're doing at OWASP, ISC2, and other organizations. Maybe they'll even cite our recommendations. The work at OWASP and ISC cites the research we're doing at Carnegie Mellon and Georgia Tech.

        I've been working full time in the infosec field for 20 years, programming infosec systems and teaching security to programmers. Gradua

        • Just to give you a feel for it, I just completed a course which has these projects as requirements:

          Break Diffie-Hellman (TLS/ssl) in two different ways.
          My exploit would allow me to listen to your VPN traffic, for example, on many VPN endpoints.

          Bypass the typical protections against cross site scripting, cross site request forgery, and SQL injection in order to exploit a site in three different ways - even though the programmer included protections against these attacks. My exploits would allow me to wire
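          For readers unfamiliar with the "typical protections" mentioned above, a small illustrative sketch (hypothetical table and data, using Python's built-in sqlite3) contrasting a string-spliced query with the parameterized form those protections rely on:

          # Baseline SQL-injection protection: bind user input as data, not SQL text.
          import sqlite3

          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
          conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

          def find_role_unsafe(name):
              # Vulnerable: attacker-controlled input is spliced into the SQL text.
              return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

          def find_role_safe(name):
              # Parameterized: the input is bound as data and never parsed as SQL.
              return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

          print(find_role_unsafe("x' OR '1'='1"))  # leaks every row
          print(find_role_safe("x' OR '1'='1"))    # returns nothing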

  • If folks are amused by the security show of TSA and other agencies in US...wait till they see what happens in countries like India.

    Here is an example: boarding an international flight - especially Air India from an airport like Bombay...you have to clear security three times by three different agencies *after* you get your boarding pass. What they are trying to do is mysterious.

    Indian bureaucracy is Kafka on steroids. At a nuclear plant they will have gun toting soldiers guarding every entrance and mu
    • This is a valid 3G defense posture (3G means Guards, Guns, and Gates) to enforce a physical defense perimeter.

      That those operating the entertainment network use GMail/Hotmail/Office232/WhatsApp is why those people are not permitted to touch actual Control System networks -- in the grand scheme of things it does not really matter if the entertainment systems go down, it is merely a mild inconvenience that, like a cold or diarrhea, will usually pass in a few days or weeks or months, and nothing of import will

  • It will get fixed one way or the other, have faith in human nature.
