Author Archives: Scott Whitlock

About Scott Whitlock

I'm Scott Whitlock, an "automation enthusiast". By day I'm a PLC and .NET programmer at ETBO Tool & Die Inc., a manufacturer.

Designing Database Tables for Automation People

It may seem like I’ve forgotten about this blog lately, but that’s not the case. The truth is last week I was on vacation, and before and after that I’ve been working on a project tangentially related to home automation, which I’ll probably be posting lots about in a couple of weeks.

However, today I wanted to touch on a topic that many of you will be familiar with: database design. When we talk about database design, we mean a database schema or, more generally, an entity relationship diagram (ERD).

If you do any kind of data logging, or you’re using a database as the data-store for your configuration data, you’ll have to do some kind of database design. Both of these cases call for a “normalized” design. In fact, de-normalized designs are typically only used for heavy-duty data-mining applications, so they’re pretty rare. The advantage of a normalized database is that it follows the “once and only once” (OAOO) software development principle, which says there should be one, and only one, definitive source for any particular fact.

So, for instance, don’t store the operator’s name all over the place. Instead, store it in a table called Operator, include an OperatorId column that’s assigned once when the operator’s row is created but never changes, and then use the OperatorId as a foreign key in your other tables. This gives you several advantages: less database storage (an Id is typically shorter than a name), a single place to change the name (typos are common and people change their names), and if you do have to change it, you only have to lock one database row during the transaction, instead of every row that uses that person’s name.
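Here’s a concrete sketch of that layout, using SQLite from Python. The table and column names are illustrative, not from any particular system:

```python
import sqlite3

# In-memory database for illustration; schema and names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Operator (
        OperatorId INTEGER PRIMARY KEY,  -- assigned once, never changes
        Name       TEXT NOT NULL
    );
    CREATE TABLE ProductionEvent (
        EventId    INTEGER PRIMARY KEY,
        OperatorId INTEGER NOT NULL REFERENCES Operator(OperatorId),
        EventTime  TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO Operator (OperatorId, Name) VALUES (1, 'J. Smyth')")
conn.execute("INSERT INTO ProductionEvent (OperatorId, EventTime) "
             "VALUES (1, '2011-06-01 08:00')")

# Fixing a typo in the name touches exactly one row...
conn.execute("UPDATE Operator SET Name = 'J. Smith' WHERE OperatorId = 1")

# ...and every event picks up the corrected name through the foreign key.
row = conn.execute("""
    SELECT o.Name FROM ProductionEvent e
    JOIN Operator o ON o.OperatorId = e.OperatorId
""").fetchone()
print(row[0])  # J. Smith
```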

That’s pretty standard stuff, but I want to take a slight tangent. By default, don’t store data you can calculate from other data. This is actually for the same reason. For instance, you wouldn’t store a person’s age, you’d store their birth date. That’s because the age changes all the time. I’m not saying you’d never store a calculated value, but doing so is an optimization, and “premature optimization is the root of all evil.”
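A quick sketch of the birth-date approach (the helper name here is mine, just for illustration): store the birth date, compute the age on demand.

```python
from datetime import date

def age_on(birth_date, as_of):
    """Compute age from the stored birth date instead of storing the age."""
    years = as_of.year - birth_date.year
    # Subtract one if the birthday hasn't happened yet this year.
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

birth = date(1980, 6, 15)
print(age_on(birth, date(2011, 6, 14)))  # 30 -- birthday not reached yet
print(age_on(birth, date(2011, 6, 15)))  # 31
```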

Let me give you a real-life example. Let’s say you wanted to record the production throughput of an automobile assembly line. Let’s assume you’re already storing the VIN of each vehicle, along with some other data (various part serial numbers, etc.). I’ve seen implementations where someone added a new table called LineThroughput, with one row per time period and a counter in each row (in fairness, I’ve done it too). Every time a vehicle comes off the line, the application finds the applicable row and increments the counter (or adds a new one as required). PLC programmers are particularly likely to do this because we’re used to having limited memory in the PLC, and PLCs come with built-in counter instructions that make this really easy. However, this is a subtle form of denormalization. The database already knows how many vehicles were made, because it has a record for each VIN. All you have to do is make sure it has a datetime column for when the vehicle rolled off the line. A simple query will give you the total number of vehicles in any time period. If you follow the route of adding the LineThroughput table, you risk a numerical discrepancy (maybe the database isn’t available when you go to increment the counter, for instance).
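Here’s a sketch of the query approach using SQLite from Python. The schema and names are hypothetical; the point is that throughput falls out of a count over the datetime column:

```python
import sqlite3

# Hypothetical Vehicle table: one row per VIN, with the time it left the line.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Vehicle (
        Vin         TEXT PRIMARY KEY,
        CompletedAt TEXT NOT NULL   -- datetime the vehicle rolled off the line
    )
""")
conn.executemany(
    "INSERT INTO Vehicle (Vin, CompletedAt) VALUES (?, ?)",
    [("VIN001", "2011-06-01 07:15"),
     ("VIN002", "2011-06-01 07:40"),
     ("VIN003", "2011-06-01 08:10")])

# Throughput for any period is just a count -- no separate LineThroughput
# counter to keep in sync with the vehicle records.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM Vehicle "
    "WHERE CompletedAt >= '2011-06-01 07:00' AND CompletedAt < '2011-06-01 08:00'"
).fetchone()
print(count)  # 2
```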

Just storing the datetime field has one more advantage: the database is more “immutable”. If data is only written, errors are less likely. If you do want to create a summary table later (for performance reasons because you query it a lot), then you can create it when the time period is over, and once you’ve written the record, you’ll never have to update the row. Again, this is better because the row is “immutable”. The data is supposed to be a historical record. Pretend it’s written in pen, not pencil. (You might be horrified to know that some electronic voting machines seem to use the LineThroughput table method to record your votes, which makes them extremely susceptible to vote-tampering.)

I hope that’s enough information to make my points: normalize your database, don’t record redundant information, or information you can calculate, and avoid situations where you have to update rows repeatedly, particularly if you’re doing data logging.

RAB Telecom Canada Review

I was recently in the market for some laser-printable wire labels and I stumbled across RAB Telecom Canada. The price was right for the smaller quantity I was after, so I decided to give them a shot. I was a bit confused by some of the wording on the website, so I contacted them.

The next day I had not only an email answering my question, but a personal phone call from the president, Richard, apologizing for the confusion, promising to have the site updated promptly, and he personally had the shipment in hand and ready to go. He added, “I will pay the shipping handling and for the labels in question. Mr. Whitlock I do this because I value your possible future business.”

As promised, I recently received the labels in great condition. It’s honestly some of the best customer service I’ve ever received from any vendor. It was so out-of-the-ordinary and unexpected that it shocked me. If you happen to be reading this because you typed RAB Telecom Canada into Google and arrived here, then let me assure you they exceeded my expectations.

The Role of the Engineer

John’s comment on a previous blog post got me thinking about the role of engineers in our society.

Primarily we optimize. Don’t get me wrong, there are lots of engineers that are innovators, inventors, and entrepreneurs, but I believe we’re stepping out of the core role of an engineer when we do that. Our core competency is to get more output from less input.

We’re flexible in our optimization. If we need to get a product to market fast, we can reduce the development time at the expense of resources or quality. Likewise, we relentlessly try to drive down the cost, as profit is the motivating factor behind the businesses that employ us. We do all of this within physical and legal constraints, like the laws of physics and building codes.

That’s why I think it’s funny when anyone proposes that we can “engineer” our way out of some environmental crisis. Engineers will only solve environmental problems if the environmental parameters are included in the equation. If you raise the price of oil, we’ll redesign our processes to use less oil. If you put environmental regulations in place, we’ll redesign our products to meet those criteria.

When I say that something like climate change is a problem for lawyers and politicians, not engineers, this is what I mean. It’s not a technological challenge. It’s a societal challenge. If you really want to consume less fossil fuel, then replace income taxes with a fossil fuel tax and see what happens: we’ll automatically shift to other sources of energy, for we are the instruments of that policy.

As engineers, we do have some say. In our role as citizens we vote. We are involved in the creation of new building codes and international standards. Even so, it’s political will that makes change. If you’re waiting for engineering to solve these big issues, don’t. Engineering will follow policy.

Functional Programming in Ladder Logic

There’s a lot of stuff that falls under the term “functional programming,” but I’m just going to focus on the “functional” part right now, meaning when you define the value of something as a function of something else.

In ladder logic, we define the values of internal state (internal coils or registers) and outputs. We define these as functions of the inputs and internal state. We call each function a “rung”, and one rung might look like this:

Ladder diagram of Inputs A and B, and Internal State C

There’s something slightly odd going on in that rung though. You might say that we’ve defined C recursively, because C is a function of A, B, and itself. We all know, of course, that the PLC has no problem executing this code, and it executes as you would expect. That’s because the C on the right is not the same as the C on the left. The C on the right is the next state of C and the C on the left is the previous state of C.

Each time we scan, we redefine the value of C. That means C is an infinite time-series of true/false values. Huh?

Ok, imagine an array of true/false (boolean) values called “C”. The lower bound on the array index is zero, but the upper bound is infinite. C[0] is false (the value when we start the program). Then we start scan number 1, we get to the rung above, and what the PLC is really solving is this:

Ladder logic defining C[1] as a function of A, B, and C[0]
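Since the rung graphics don’t reproduce well here, a small Python sketch can stand in for the idea. Assume, for illustration, that the rung is a typical seal-in (start/stop) circuit, C = (A OR C) AND NOT B; the exact contacts don’t matter, only that C’s next state is a function of the inputs and C’s previous state:

```python
# Hypothetical seal-in rung: C = (A OR C) AND NOT B.
def scan(A, B, C_prev):
    """One PLC scan: the next state of C is a function of A, B, and C's previous state."""
    return (A or C_prev) and not B

# C as a time series: C[0] is the initial state, C[n] is the value after scan n.
C = [False]
inputs = [(True, False),   # A pressed: C seals in
          (False, False),  # A released: C holds via its own contact
          (False, True)]   # B pressed: C drops out
for A, B in inputs:
    C.append(scan(A, B, C[-1]))
print(C)  # [False, True, True, False]
```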

If that were actually true (if it had an infinite array to store each coil’s value), then the ladder logic would be a truly functional programming language. But it’s not. Consider this:

Two ladder logic rungs with inputs A and B, internal coil C, and output D

In all modern PLCs, the first rung overwrites the value of C, so the second rung effectively uses the newly computed value for C when evaluating D. That means D[1] is defined as being equal to C[1] (the current state value of C). Why is this weird? Consider this:

Two previous rungs with the rung order reversed

By reversing the order of the rungs, I’ve changed the definition of D. After the re-ordering, D is now defined as C[0] (the previous state value of C) rather than C[1]. This isn’t a trivial difference. In an older PLC your scan time can be in the hundreds of milliseconds, so the D output can react noticeably slower in this case.
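To make the difference concrete, here’s a Python sketch of the two scan orders. It assumes, purely for illustration, a seal-in rung for C and a second rung that simply drives output D from C:

```python
# Assumed rungs: C = (A OR C) AND NOT B, and D = C.
def scan_c_first(A, B, state):
    C, D = state
    C = (A or C) and not B   # rung 1 writes C...
    D = C                    # ...so rung 2 sees the NEW value: D[n] = C[n]
    return C, D

def scan_d_first(A, B, state):
    C, D = state
    D = C                    # the D rung now sees the OLD value: D[n] = C[n-1]
    C = (A or C) and not B
    return C, D

# Press A on the first scan and watch when D reacts under each ordering.
s1 = scan_c_first(True, False, (False, False))  # D turns on this scan
s2 = scan_d_first(True, False, (False, False))  # D is still off -- one scan behind
print(s1, s2)  # (True, True) (True, False)
```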

In a truly functional language, the re-ordering either wouldn’t be allowed (you can’t define D, which depends on C, before you define C) or the compiler would be able to determine the dependencies and re-order the evaluation so that C is evaluated before D. It would likely complain if it found a circular dependency between C and D, even though a PLC wouldn’t care about circular dependencies.

There are a few reasons why PLCs are implemented like this. First, it saves memory. We would have to double our memory requirements if we always wanted to keep the last state and the next state around at the same time. Secondly, it’s easier to understand and troubleshoot. Not only does the PLC avoid keeping around two copies of each coil, but the programmer only has to worry about one value of each coil at any given point in the program. Third, the PLC runtime implementation is much simpler. It can be (and is) compiled to a kind of assembly language that can run efficiently on single-threaded CPUs, which were the only CPUs available until recently.

Of course this comes with a trade-off. Imagine, for a moment, if rung-ordering didn’t matter. If you could solve the rungs in any order, that means you could also solve the rungs in parallel. That means if you upgraded to a dual-core CPU, you could instantly cut your scan time in half. Alas, the nature of ladder logic makes it very difficult to execute rungs in parallel.

On the other hand, we can still enforce a functional programming paradigm in our ladder logic programs if we follow these rules:

  • Never define a coil more than once in your program.
  • Don’t use a contact until after the rung where the associated coil has been defined.

That means there should only be one destructive write to any single memory location in your program. (It’s acceptable to use Set/Reset or a group of Move instructions that write to the same memory location as long as they’re on the same or adjacent rungs).

It also means that if coil C is defined on rung 5, then rungs 1 through 4 shouldn’t contain any contacts of coil C. This is the harder rule to follow. If you find you want to reference a coil before it’s defined, ask yourself if your logic couldn’t be re-organized to make it flow better.

Remember, someone trying to solve a problem in a PLC program starts at an output and uses cross references to move back through the program trying to understand it. Cross referencing from a contact to a coil that moves you forward in the program doesn’t require any logical leaps, but cross referencing to a coil later in the program means you need to logically think one scan backwards in time.

Benefits

While ladder logic isn’t a truly functional language, you can write ladder logic programs in the functional programming paradigm. If you do, you’ll find that your outputs react faster, and your programs are easier to understand and troubleshoot.

Technical Difficulties

I’m sorry if you were having issues with this site recently. It turns out that when I upgraded the WordPress software, it became incompatible with one of the “widgets” I had installed (specifically the AddThis bookmarking one). It broke all internal links (when you clicked on any internal link, you just got a blank page). I upgraded the AddThis widget to the newer version and now it all seems to be working again.

Computers… arg!

How to Read Industrial Control System Wiring Diagrams

I write a lot about the PLC side of industrial automation, but it’s also fundamental to have a good foundation in the electrical side of things.

First of all, most modern (North American) industrial control system wiring diagrams have a relatively common numbering scheme, and once you understand the scheme, it makes it fairly easy to navigate the wiring diagram (commonly called a “print set”).

Let’s start with the page and line numbering. Most multi-page wiring diagrams use a two digit page number (page 1 is “01”). In the rare, but possible, event that you end up with over 99 pages, some diagrams will just add 3 digit page numbers (starting at “100”), but if there was any forethought, many designers will divide their wiring diagrams into sections, giving each section a letter (let’s say “A” for the header material, “B” for power distribution, “C” for safety circuits, etc.). Within each section, you can re-start the page numbering at “01”. This has the added bonus of letting you insert more pages into one section without messing up the page numbering.

Within a page, you’ll typically see line numbers down the left side (and frequently continuing down the middle if you don’t need the whole width of the page for your circuits). These numbers will start with the two digit page number, followed by a two digit line number. Typically these start at zero, and increment by twos:

  • 1000
  • 1002
  • 1004
  • 1006
  • … and so on

Now, devices (like pushbuttons, power supplies, etc.) usually have a device ID based on the four digit line number where they are shown in the wiring diagram, possibly with a prefix or suffix noting the device type. So, if you have a pushbutton on line 2040, the device ID might be PB2040 or 2040PB. The device ID should also be attached to the device itself, normally with an indelible etched label (lamacoid). Therefore, if you find a device in the field, you should be able to find its location in the wiring diagram. (Finding the wiring diagram, of course, is often the more difficult task.)

Wires are numbered similarly. The wire number is typically based on the four digit line number where the wire starts, plus one extra digit or letter in case you have more than one wire number starting on the same line. So, the wire numbers for 2 wires starting on line 1004 might be 10041 and 10042.
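The scheme is mechanical enough to express in a few lines. This Python sketch just follows the convention described above; the exact format (prefix vs. suffix, letters vs. digits) varies from shop to shop:

```python
# Illustrative helpers following the numbering convention in the text.
def line_number(page, line):
    """Two-digit page number followed by a two-digit line number,
    e.g. page 10, line 4 -> '1004'."""
    return f"{page:02d}{line:02d}"

def device_id(prefix, page, line):
    """Device ID from the line where the device appears,
    e.g. a pushbutton (PB) on line 2040 -> 'PB2040'."""
    return f"{prefix}{line_number(page, line)}"

def wire_number(page, line, seq):
    """Wire number: the originating line number plus a suffix digit in case
    more than one wire starts on the same line."""
    return f"{line_number(page, line)}{seq}"

print(device_id("PB", 20, 40))   # PB2040
print(wire_number(10, 4, 1))     # 10041
print(wire_number(10, 4, 2))     # 10042
```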

It’s typical for wires to connect to devices that are on other pages (it’s extremely common, in fact). In that case, you’ll see off-page connectors. The shape of these varies based on whose standard was used for the wiring diagram, but they’re typically rectangles or hexagons. In either case, inside the shape will be the four digit line reference number where the wire continues. The other end of the wire (on the other page) has an off-page connector pointing back in the opposite direction. Note that you frequently see connectors from one place on a page to another place on the same page, if it happens to improve readability.

That’s all you really need to know to find devices and follow wires in a wiring diagram. Now, to understand the components in an industrial control system, that’s going to take longer than a blog post. For a great introduction, I recommend the book Industrial Motor Control by Stephen Herman. Google books has a great preview if you want to check it out.


What Motivates Makers

A couple weeks ago I wrote a post about budgets, and Ken from over at Robot Shift replied:

My experience when compensation is set up to be win-win it’s a good thing. If someone’s working hard, and doing the right things often their impact far exceeds that of just their hourly contribution. It impacts teams, groups and profitability. If the organization profits from this impact, the employee should as well.

On the surface I think that’s true – we all appreciate profit sharing, but I think Ken’s talking about performance-metric-based pay. I don’t agree with tying creative work to performance pay. The inner workings of the human mind continue to astound us. An article in New Scientist says:

…it may come as a shock to many to learn that a large and growing body of evidence suggests that in many circumstances, paying for results can actually make people perform badly, and that the more you pay, the worse they perform.

“In many circumstances” – I’m going to talk about one such circumstance shortly. Of course for more surprising information there’s the classic Dan Pink on the surprising science of motivation, and Clay Shirky’s talk on Human Motivation.

There’s a big difference between intrinsic motivation and extrinsic motivation. If you have to choose, intrinsic motivation is more powerful. It costs less, and it’s a stronger motive force. I believe that’s due to the phenomenon of “flow”. If you work in any type of creative field, you know the power of flow – that feeling you get when you’re 100% immersed in your work, the outside world seems to drift away, you lose track of time, but you’re incredibly productive. Without distractions you can focus your full attention on solving the problem at hand. Once you’ve experienced this a few times, it’s like a drug.

For someone like me, assuming I make enough money to pay the bills, I would trade any additional money to spend more time in “flow”. Yes, I’m Scott Whitlock and I’m addicted to flow. (I’m not the only one.) If you have one of those lucky days where you spend a full 8 hours without distractions, wrestling with big problems, you’ll come home mentally exhausted but exhilarated. At night you truly enjoy your family time, not because your job sucks and family time is an escape, but because you’re content. The next morning you’re excited on your drive to work. Your life has balance.

Paul Graham wrote an excellent essay about the Maker’s Schedule, Manager’s Schedule. The Maker blocks out their time in 4 hour chunks because they need to get into “flow” to do their best work. Managers, on the other hand, schedule their days in 30 or 60 minute chunks.

Do you see the problem? If a Manager never experiences flow then they can’t understand what motivates their Makers. That’s why management keeps pushing incentive pay: a lot of Managers are extrinsically motivated and they can’t get inside the head of someone who isn’t.

In case you happen to be a Manager, I feel it’s my duty to help you understand. It’s like this: if you’re addicted to flow, then being a Maker is an end in itself. If you still don’t get it, please read the story of The Fisherman and the Businessman. Ok, got it? Good.

So, if you’re managing people who are intrinsically motivated, here’s how you should set up their pay:

  • Make sure their salary or wage is competitive. If it’s not, they’ll feel cheated and resentful.
  • Profit sharing is ok if you don’t tell them how you arrived at the numbers.

Yeah, that’s crazy isn’t it? But it’s true. Do you want to know why? I’ve spent most of my career paid a straight wage without incentive pay. I did get bonuses, but they were just offered to me as, “good work this year, here’s X dollars, we hope you’re happy here, thanks very much.” Under that scheme when someone came to me with a problem, I relished the challenge. I dove into the problem, made sure I fully understood it, found the root cause and fixed it. The result was happy customers and higher quality work.

For a short time I was bullied into accepting performance-based pay. My metrics were “project gross margin” and “percent billable hours vs. target”. Then when someone came to me with a problem unrelated to my current project, my first reaction was “this is a distraction — I’m not being measured on this criteria.” When I paused to help a co-worker on another team for half an hour on their project, my boss reminded me that I wasn’t helping our team’s metrics. My demeanor with customers seemed to change. Work that used to seem challenging became extremely stressful. I lost all intrinsic motivation. I was no longer working to help the customer – I was working to screw the customer out of more money. It was as if, by dangling the carrot in front of my nose, I could no longer see the garden.

It’s hard for me to admit that. When you’re intrinsically motivated, you’re proud of it. It makes you feel good to do the work for its own sake. For Makers, doing work for performance pay is a hollow substitute. It demoralizes you.

I guess my point is, if you manage Makers, how do you not know this? It’s your job!

Ladder Logic vs. C#

PC programming and PLC programming are radically different paradigms. I know I’ve talked about this before, but I wanted to explore something that perplexes me… why do so many PC programmers hate ladder logic when they are first introduced to it? Ladder logic programmers don’t seem to have the same reaction when they’re introduced to a language like VB or C.

I mean, PC programmers really look down their noses at ladder logic. Here’s one typical quote:

Relay Ladder Logic is a fairly primitive langauge. Its hard to be as productive. Most PLC programmers don’t use subroutines; its almost as if the PLC world is one that time and software engineering forgot. You can do well by applying simple software engineering methods as a consequence, e.g., define interfaces between blocks of code, even if abstractly.

I’m sorry, but I don’t buy that. Ladder logic and, say, C# are designed for solving problems in two very different domains. In industrial automation, we prefer logic that’s easy to troubleshoot without taking down the system.

In the world of C#, troubleshooting is usually done in an offline environment.

My opinion is that Ladder Logic looks a lot like “polling” and every PC programmer knows that polling is bad, because it’s an inefficient use of processor power. PC programmers prefer event-driven programming, which is how all modern GUI frameworks react to user-initiated input. They want to see something that says, “when input A turns on, turn on output B”. If you’re familiar with control systems, your first reaction to that statement is, “sure, but what if B depends on inputs C, D, and E as well”? You’re right – it doesn’t scale, and that’s the first mistake most people make when starting with event-driven programming: they put all their logic in the event handlers (yeah, I did that too).

Still, there are lots of situations where ladder logic is so much more concise than, say, C# at implementing the same functionality that I just don’t buy all the hate directed at ladder logic. I decided to illustrate it with an example. Take this relatively simple ladder logic rung:

What would it take to implement the same logic in C#? You could say all you really need to write is D = ((A && B) || D) && C; but that’s not exactly true. When you’re writing an object oriented program, you have to follow the SOLID principles. We need to separate our concerns. Any experienced C# programmer will say that we need to encapsulate this logic in a class (let’s call it “DController” – things that contain business logic in C# applications are frequently called Controller or Manager). We also have to make sure that DController only depends on abstract interfaces. In this case, the logic depends on access to three inputs and one output. I’ve gone ahead and defined those interfaces:

    public interface IDiscreteInput
    {
        bool GetValue();
        event EventHandler InputChanged;
    }

    public interface IDiscreteOutput
    {
        void SetValue(bool value);
    }

Simple enough. Our controller needs to be able to get the value of an input, and be notified when any input changes. It needs to be able to change the value of the output.

In order to follow the D in the SOLID principles, we have to inject the dependencies into the DController class, so it has to look something like this:

    internal class DController
    {
        public DController(IDiscreteInput inputA, 
            IDiscreteInput inputB, IDiscreteInput inputC, 
            IDiscreteOutput outputD)
        {
        }
    }

That’s a nice little stub of a class. Now, as an experienced C# developer, I follow test-driven development, or TDD. Before I can write any actual logic, I have to write a test that fails. I break open my unit test suite, and write my first test:

        [TestMethod]
        public void Writes_initial_state_of_false_to_outputD_when_initial_inputs_are_all_false()
        {
            var mockInput = MockRepository.GenerateStub<IDiscreteInput>();
            mockInput.Expect(i => i.GetValue()).Return(false);
            var mockOutput = MockRepository.GenerateStrictMock<IDiscreteOutput>();
            mockOutput.Expect(o => o.SetValue(false));

            var test = new DController(mockInput, mockInput, mockInput, mockOutput);

            mockOutput.VerifyAllExpectations();
        }

Ok, so what’s going on here? First, I’m using a mocking framework called Rhino Mocks to generate “stub” and “mock” objects that implement the two dependency interfaces I defined earlier. This first test just checks that the first thing my class does when it starts up is to write a value to output D (in this case, false, because all the inputs are false). When I run my test it fails, because my DController class doesn’t actually call the SetValue method on my output object. That’s easy enough to remedy:

    internal class DController
    {
        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (outputD == null) throw new ArgumentNullException("outputD");
            outputD.SetValue(false);
        }
    }

That’s the simplest logic I can write to make the test pass. I always set the value of the output to false when I start up. Since I’m calling a method on a dependency, I also have to include a guard clause in there to check for null, or else my tools like ReSharper might start complaining at me.

Now that my tests pass, I need to add some more tests. My second test validates when my output should turn on (only when all three inputs are on). In order to write this test, I had to write a helper class called MockDiscreteInputPatternGenerator. I won’t go into the details of that class, but I’ll just say it’s over 100 lines long, just so that I can write a reasonably fluent test:

        [TestMethod]
        public void Inputs_A_B_C_must_all_be_true_for_D_to_turn_on()
        {
            MockDiscreteInput inputA;
            MockDiscreteInput inputB;
            MockDiscreteInput inputC;
            MockDiscreteOutput outputD;

            var tester = new MockDiscreteInputPatternGenerator()
                .InitialCondition(out inputA, false)
                .InitialCondition(out inputB, false)
                .InitialCondition(out inputC, false)
                .CreateSimulatedOutput(out outputD)
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOff()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputC).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOff()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOn()
                .AssertThat(outputD).ShouldBe(true); // finally turns on

            var test = new DController(inputA, inputB, inputC, outputD);

            tester.Execute();
        }

What this does is cycle through all the combinations of inputs that don’t cause the output to turn on, and then I finally turn them all on, and verify that it did turn on in that last case.

I’ll spare you the other two tests. One checks that the output initializes to on when all the inputs are on initially, and the last test checks the conditions that turn the output off (only C turning off, with A and B having no effect). In order to get all of these tests to pass, here’s my final version of the DController class:

    internal class DController
    {
        private readonly IDiscreteInput inputA;
        private readonly IDiscreteInput inputB;
        private readonly IDiscreteInput inputC;
        private readonly IDiscreteOutput outputD;

        private bool D; // holds last state of output D

        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (inputA == null) throw new ArgumentNullException("inputA");
            if (inputB == null) throw new ArgumentNullException("inputB");
            if (inputC == null) throw new ArgumentNullException("inputC");
            if (outputD == null) throw new ArgumentNullException("outputD");

            this.inputA = inputA;
            this.inputB = inputB;
            this.inputC = inputC;
            this.outputD = outputD;

            inputA.InputChanged += new EventHandler((s, e) => setOutputDValue());
            inputB.InputChanged += new EventHandler((s, e) => setOutputDValue());
            inputC.InputChanged += new EventHandler((s, e) => setOutputDValue());

            setOutputDValue();
        }

        private void setOutputDValue()
        {
            bool A = inputA.GetValue();
            bool B = inputB.GetValue();
            bool C = inputC.GetValue();

            bool newValue = ((A && B) || D) && C;
            outputD.SetValue(newValue);
            D = newValue;
        }
    }

So if you’re just counting the DController class itself, that’s approaching 40 lines of code, and the only really important line is this:

    bool newValue = ((A && B) || D) && C;

It’s true that as you wrote more logic, you’d refactor more and more repetitive code out of the Controller classes, but ultimately most of the overhead never really goes away. The best you’re going to do is develop some kind of domain specific language which might look like this:

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .DefinedAs((A, B, C, D) => ((A && B) || D) && C);

…or maybe…

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .TurnsOnWhen((A, B, C) => A && B && C)
        .StaysOnWhile((A, B, C) => C);

…and how is that any better than the original ladder logic? That’s not even getting into the fact that you wouldn’t be able to use breakpoints in C# when doing online troubleshooting. This code would be a real pain to troubleshoot if the sensor connected to inputA was becoming flaky. With ladder logic, you can just glance at it and see the current values of A, B, C, and D.

Testing: the C# code is complex enough that it needs tests to prove that it works right, but the ladder logic is so simple, so declarative, that it’s obvious to any Controls Engineer or Electrician exactly what it does: turn on when A, B, and C are all on, and then stay on until C turns off. It doesn’t need a test!

Time-wise: it took me about a minute to get the ladder editor open and write that ladder logic, but about an hour to put together this C# example in Visual Studio.

So, I think when someone gets a hate on for ladder logic, it just has to be fear. Ladder logic is a great tool to have in your toolbox. Certainly don’t use ladder logic to write an ERP system, but do use it for discrete control.