Programming both PCs and PLCs sometimes gets me thinking about programming at a higher level. I’ve written a lengthy answer over on StackOverflow about the differences between PC and PLC programming. What I haven’t talked about before is how they are the same.
First, let me define what I mean by PC and PLC programming. By PC programming, I’m generally referring to imperative programming. There are two popular PC programming paradigms, imperative and declarative, and the paradigm with new-found popularity, functional, is actually a subset of declarative programming.
How PC and PLC programming are NOT the same
Most PLC programming falls into the declarative category, and most PC programming falls into the imperative category. For example:
PC:
- Visual Basic, C/C++, C#: imperative
- Lisp, F#: functional
- HTML, XAML: declarative
PLC:
- Ladder logic: declarative
- Function block diagram: functional
- Structured text: imperative
So normally when we talk about the differences between PC and PLC programming, we’re talking about the differences between imperative and declarative programming, but there’s obviously overlap on both sides of the fence.
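To make that distinction concrete, here’s a minimal sketch in IEC 61131-3 Structured Text (the tag names are invented for illustration). The first assignment reads like a ladder rung: it declares what the output is, every scan. The second version is imperative: a sequence of statements that changes state, where the result depends on the order in which they execute.

```
PROGRAM ParadigmDemo
VAR
    StartRequested, EStopActive, JamDetected : BOOL;
    ConveyorRun : BOOL;
END_VAR

(* Declarative style: the output is defined by its conditions,
   like a single ladder rung ending in one coil. *)
ConveyorRun := StartRequested AND NOT EStopActive AND NOT JamDetected;

(* Imperative style: the same intent written as a sequence of steps.
   (Shown here only for comparison; you'd pick one style, not both.) *)
IF StartRequested THEN
    ConveyorRun := TRUE;
END_IF;
IF EStopActive OR JamDetected THEN
    ConveyorRun := FALSE;
END_IF;
END_PROGRAM
```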
The major difference, however, is audience. In North America at least, we write PC programs with the expectation that other programmers will have to read, understand and make changes to them, but we write PLC programs with the expectation that people in the maintenance department will go online with them and troubleshoot them. Just think about how odd that would be in the PC world: when a word processor crashes, nobody whips out their debugger, figures out what caused the program to crash, makes a fix and continues writing their letter. Primarily this is because the source code doesn’t come with the word processor, but it’s also because the programming language can only be understood by programmers.
How PC and PLC programming ARE the same
When you look at what makes a PC program good or bad, on a high level it’s the same thing that makes a PLC program good or bad: readability. Now as I’ve pointed out, the people who have to read the program are different in each case, but readability really is the fundamental measure by which experienced programmers rate programs.
On the PC side, the name of the game with readability is modularity. You want to divide your program into parts, and you want to make those individual parts as self-contained as possible. You want to minimize the interaction between these parts as much as possible. That makes it easier to reason about the program, because you’re abstracting away the underlying complexity of each piece and leaving a less complex interface that you can interact with. The entire domain of object-oriented programming is an extension of the concept of modularity.
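The same idea shows up in the IEC 61131-3 languages as function blocks. As a rough sketch (the slide-control names below are invented for illustration), a function block keeps its internal logic behind a small, self-contained interface:

```
FUNCTION_BLOCK FB_SlideControl
VAR_INPUT
    AdvanceCmd  : BOOL; (* request to advance the slide *)
    RetractCmd  : BOOL; (* request to retract the slide *)
    AdvancedLS  : BOOL; (* advanced limit switch *)
    RetractedLS : BOOL; (* retracted limit switch *)
END_VAR
VAR_OUTPUT
    AdvanceValve : BOOL; (* solenoid output *)
    InPosition   : BOOL; (* TRUE when the commanded position is reached *)
END_VAR

(* All of the slide's internal behaviour lives in here; the rest of the
   program only sees the inputs and outputs declared above. *)
AdvanceValve := AdvanceCmd AND NOT RetractCmd;
InPosition   := (AdvanceCmd AND AdvancedLS) OR (RetractCmd AND RetractedLS);
END_FUNCTION_BLOCK
```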
On the PLC side, readability is equivalent to being able to troubleshoot the machine when it’s down. Experienced PLC programmers ask themselves, “if this machine stopped unexpectedly and I had to figure out why it stopped, what would I do? How can I make it easier for someone following that process to figure out what’s wrong with the machine?”
It turns out that most people troubleshooting a machine follow a similar procedure: you start at the outputs and you work your way backwards. You generally have a good idea what the machine is supposed to do next (e.g. move slide A to position B). You can look at the print set, or even the valve itself, and figure out which output should be turning on. You look at the indicator on the output card and it’s not on, so the logic isn’t telling it to turn on. You crack open the laptop, and you find that output. You’re looking for one thing: the COIL.
Notice the one big mistake you could make if you’re writing a program: you could use a whole bunch of set and reset (or latch and unlatch) instructions to drive your outputs. Based on my description, you can easily see why that would make the program less readable: which set instruction is the one that’s supposed to be turning on the output right now? If there’s only one, it’s easy, but if there are 10, you’re already lost.
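Here’s a rough Structured Text equivalent of both approaches, with invented tag names. With the single “coil” version, one cross-reference hit tells the whole story; with the scattered set/reset version, the troubleshooter has to find and evaluate every write to the output before they know which one matters:

```
PROGRAM CoilDemo
VAR
    AutoMode, PartPresent, FaultActive, ManualRetract : BOOL;
    ClampValve_Coil    : BOOL; (* driven from one place *)
    ClampValve_Latched : BOOL; (* driven from several places *)
END_VAR

(* Readable: a single "coil" - one rung decides the output's state. *)
ClampValve_Coil := AutoMode AND PartPresent AND NOT FaultActive;

(* Hard to troubleshoot: the output is set and reset from several
   places, so no single rung explains why it is (or isn't) on. *)
IF AutoMode AND PartPresent THEN
    ClampValve_Latched := TRUE;   (* set / latch *)
END_IF;
IF FaultActive THEN
    ClampValve_Latched := FALSE;  (* reset / unlatch *)
END_IF;
IF ManualRetract THEN
    ClampValve_Latched := FALSE;  (* another reset, somewhere else entirely *)
END_IF;
END_PROGRAM
```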
Let’s assume you do find the coil that drives this output. Your next step is to follow the logic back through the rungs, right clicking on the conditions that aren’t satisfied and cross referencing until you understand what the machine is waiting for. What are some obvious mistakes you can make that would hinder this process?
- Using an integer (or sequencer – yuck!) to store your automatic process step number rather than using individual coils for each step (see the sketch after this list)
- Using set/reset or latch/unlatch instructions more than once on each bit
- Using really long tag names so readers have to scroll left/right or up/down more than necessary to read one rung
- Calling subroutines more than once per scan so you can’t see the state of the logic in the subroutine (newer controllers have function blocks where you can drill down into individual instances, which is nice)
- Using For Loops – same reason
- Having logic that is conditionally scanned – particularly in controllers where it isn’t obvious whether the logic you’re looking at is being scanned or not
- Mapping your inputs or outputs by block copying them to or from a user defined type, word or array (Don’t make the reader start counting bits! The line is down!)
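To illustrate the first item on that list, here’s a hedged Structured Text sketch with invented step and condition names. With one coil per step, the online view shows at a glance which step is active and what it’s waiting for; with a bare integer, “StepNumber = 20” means nothing until the troubleshooter goes digging through the CASE logic:

```
PROGRAM StepDemo
VAR
    AutoMode, CycleStart, ClampClosedLS, SlideAdvancedLS : BOOL;
    Step10_Clamp, Step20_Advance, Step30_Drill : BOOL; (* one coil per step *)
    StepNumber : INT; (* the alternative: a single integer *)
END_VAR

(* Individual step coils: each step seals itself in and drops out when
   the next step takes over, so the active step is visible directly. *)
Step10_Clamp   := AutoMode AND (CycleStart OR Step10_Clamp) AND NOT Step20_Advance;
Step20_Advance := AutoMode AND ((Step10_Clamp AND ClampClosedLS) OR Step20_Advance) AND NOT Step30_Drill;
Step30_Drill   := AutoMode AND ((Step20_Advance AND SlideAdvancedLS) OR Step30_Drill);

(* Integer step number: compact to write, but the value alone tells the
   person at the machine nothing about what the step is waiting for.
   (Assume StepNumber was set to 10 when the cycle started.) *)
CASE StepNumber OF
    10: IF ClampClosedLS THEN StepNumber := 20; END_IF;
    20: IF SlideAdvancedLS THEN StepNumber := 30; END_IF;
END_CASE;
END_PROGRAM
```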
Once you start thinking from the point of view of someone troubleshooting the machine, your perspective on good vs. bad programming really changes. You realize that techniques that seem to save you time while you’re programming end up costing the company hours of lost production time while maintenance picks their way through your cryptic logic.
Next time you’re writing your ladder logic, think of the poor maintenance guy who has to figure out what’s wrong, and try to make his life a little less miserable.
Can I propose that the Sequencer instruction be removed from the next generation of controllers? Trying to change the functionality of an undocumented sequencer is a job for MI6… Great post Scott – well-structured and documented PLC code that is easy to follow may be a little extra work up front, but your customer will be thanking you in the long run, and so will your colleagues who go in to help out years later!
Oh c’mon, a great engineer I know once told me that since it was hard to write, it should be hard to read!
Great post Scott, will be sharing this with many people. As a hardware vendor it’s our goal to make our hardware as easy as possible to work with, and that includes providing examples that are written the way we think our end users should be writing their programs.
Good point Nick. Actually, I’ve been wondering if you can create a programming environment that makes it easy to write readable code, and hard to write unreadable code… What do you think?
Thank you, Scott, for this article.
I would like to go a little bit further and take subroutines out of the equation. Subroutines are good for the programmer, but not for the electrician.
How much does a subroutine cost? (Nothing, and it’s good for the integrator.) How many problems can you get with a subroutine? (Too many, and it can take a lot of time to find a problem.)
I have seen this situation a lot with electricians who aren’t very well trained.
I am trying to establish an efficient way of learning how to troubleshoot PLC software (RSLogix 5000, Allen-Bradley) without an actual PLC connected (re: REM/RUN/PROG modes). If PLC output simulations are required, how can these be achieved to replicate errors so that the software can be debugged effectively? (RSWho, RSLinx, RSLogix 6000)
Hi Scott, I am a PC programmer trying to learn PLC programming. I find your website a great place for learning.
I agree with most of what you said about readability, but I don’t quite understand why mapping inputs or outputs by block copying them to or from a user defined type is bad.
I don’t know if RSLogix has changed since you wrote the article in 2010, but in my case I can get meaningful tag names into an organized tree structure. I find it easier to use in programs and easier to browse in the HMI. I include the bit number in the description, so if anyone needs to count bits it won’t be too difficult. But I can’t think of a scenario where somebody would actually need to count bits when all the named tags are available.
Hardware changes that result in changes to the process data structure are easier to implement if the data is copied to or from a user-defined type. You just need to insert an element in the user-defined structure instead of remapping all of the alias tags, which is time-consuming and error-prone.
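For readers who haven’t seen this pattern, here’s a rough IEC 61131-3 Structured Text sketch of the kind of mapping being described (the type, tag names and bit assignments are invented, and bit-access syntax varies by platform). The raw input word is mapped into a named structure in one place, and everything downstream references the named members instead of bit numbers:

```
TYPE ST_StationInputs :
STRUCT
    PartPresent   : BOOL;
    ClampClosed   : BOOL;
    SlideAdvanced : BOOL;
    DoorClosed    : BOOL;
END_STRUCT
END_TYPE

PROGRAM MapInputs
VAR
    RawInputs : WORD;             (* raw image of the input card *)
    Station   : ST_StationInputs; (* named copy used by the rest of the program *)
END_VAR

(* Each bit number appears exactly once, here in the mapping, so nobody
   downstream ever has to count bits to find a signal. *)
Station.PartPresent   := RawInputs.0;
Station.ClampClosed   := RawInputs.1;
Station.SlideAdvanced := RawInputs.2;
Station.DoorClosed    := RawInputs.3;
END_PROGRAM
```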