Category Archives: Industrial Automation

“Best Practices,” Indeed

I’ve just been reading Ken McLaughlin’s recent post Top Ten Signs an Integrator is the Real Deal #7: Best Practices and Standards and I have to say, my initial reaction is one of skepticism. I think Ken’s thinking is a little too narrow on this one. Let me explain…

This isn’t the first time I’ve considered the “problem of standards” on this blog. In an earlier post, Standards for the Sake of Standards, I explained how most corporate standards eventually end up out-of-date and absurd, mostly because nobody making the standard ever thinks to write down why the standard exists, which would allow future policy-makers to understand the reasons and change the standard when it no longer applies. Instead, it becomes gospel.

However, that isn’t to say you could run a large organization without best practices and standards. That’s the point, isn’t it? In order to become large, you need built-in efficiency, and you get that at the expense of innovation. Big companies don’t innovate (the one notable exception is Apple, and the standing rebuttal is, “fine, so give one example other than Apple”). Almost all innovation happens in small companies, in a tightly knit group of superstars whose chains have been removed. Best Practices are, in fact, put in place to clamp down on innovation, because innovation is risky, and investors hate risk. It’s better to make lots of average product for average people than exceptional products for a few people (hence McDonald’s). Paul Graham, as usual, has something insightful to add to this:

Within large organizations, the phrase used to describe this approach is “industry best practice.” Its purpose is to shield the pointy-haired boss from responsibility: if he chooses something that is “industry best practice,” and the company loses, he can’t be blamed. He didn’t choose, the industry did.

I believe this term was originally used to describe accounting methods and so on. What it means, roughly, is don’t do anything weird. And in accounting that’s probably a good idea. The terms “cutting-edge” and “accounting” do not sound good together. But when you import this criterion into decisions about technology, you start to get the wrong answers.

The reason small companies are innovative is that innovative people can’t stand corporate environments. Imagine if you were an inspired chef… could you stand working at McDonald’s? Could McDonald’s even stand to employ you? You’d be too much trouble! You’d have to work in that nice one-off restaurant called “Maison d’here” where the manager puts up with your off-beat attitude because ultimately you make good food, and you keep their small but devoted clientèle coming back. But you can’t be franchised. The manager of the restaurant can’t scale you up without making what you do into a procedure.

So back to Ken’s topic… if you are choosing a systems integrator, you need to decide if you’re buying an accounting system (i.e. something that’s generic to all companies, and not a competitive advantage), or something that is a competitive advantage to you. When you’re automating your core business processes, you must build competitive advantage into them, and that takes innovation. If that’s the case, stay away from larger integrators with miles and miles of red tape and bureaucracy. Go for the “boutique” integrator (somewhere in the 7-to-25-person range, under $10 million per year in revenue) that can show you good references. You’re looking for a small group of passionate people. Buzzwords are a warning sign; small companies don’t have time for corporate-speak.

I’m not saying you should use the two guys in their garage. These guys are ok for your basic maintenance tasks, small changes, and local support, but you do want someone who has been around for a few years and has at least a couple of backup engineers they can pull in if there’s a problem. Make sure they have a server, with backups, and all that.

On the other hand, if what you’re automating is very large and very standard, that’s when you want to go with Ken’s approach. If you need to integrate a welding line, paint line, or whatever, there’s nothing new or innovative in that, so you want to lower the risk. You know all the big integration companies can do this, so go and get three bids, and choose the one that’s hungriest for the work. Make sure they have standards and best practices. The reduction in risk is worth it if you don’t need the innovative solution.

You can do a hybrid approach. Identify the parts of your process that could be key competitive advantages if you could find a better way to do it. This is where innovation pays off. Go out and consult with some boutique integrators ahead of time and get them working on those “point solutions”. Then go to the bigger companies to farm out the rest of your automation needs. How’s that for a “best practice”?

The User Interface Makes the Difference, Except in Automation

Start by watching this video about the Aeryon Scout robot (kudos Kareem):

I think what sets this aerial robot apart, as Kareem says, is the intuitive user interface. When I look at the state of automation today, I can see that good user interfaces are typically an afterthought. Custom solutions are sometimes so cobbled together that there isn’t enough bandwidth between one black box and the HMI, or the HMI is just a simple two-line text display that ends up saying FAULT 53 (the manual with the list of faults, of course, is stuck inside the door of the panel, and it’s the only thing in the area that isn’t covered in grease, because nobody bothers to look at it).

People frequently blame engineers for this mess, which I find a bit silly. Certainly user interfaces are a critical component of any system, but why do you hire an electrical designer to do the electrical design and a programmer to write the software, yet expect one of these people to magically become a usability expert, when usability is a field unto itself?

I think there used to be an idea that there was no payback on usability. Certainly if you’re selling something like a VCR, you can only print features on the box (you can’t accurately represent the experience of using it) and people only buy one. However, as items become more social (think iPhone), we’re starting to see great user interfaces create viral marketing for products. I think I first saw this with the TiVo – once you saw what it could do, and how easy it was, you were hooked. Apple’s technology seems to be the same way, and I can see how the Aeryon Scout probably has the same “shock and awe” effect when you demo it.

Where does that leave us with industrial automation interfaces? Automation is always purchased based on a cost-benefit analysis because of the high capital cost. The operators typically don’t participate in the purchasing decision at all. I don’t think effort put into a better user interface is wasted; in fact I’m certain there’s a long term payback. But it’s not a selling feature and it takes more time to do right.

Still, when I programmed a machine recently, it was nice to overhear someone say, “it’s pretty intuitive, isn’t it?” So I guess I’ll keep trying, even if it’s not in my own best interest. Engineers are weird that way.

If you’re interested in making better user interfaces, first I recommend reading The Design of Everyday Things by Donald Norman. I also recommend the video The Least You Can Do About Usability by Steve Krug, author of Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition.

The Two Kinds of Automation Software

As we all know, there are 10 kinds of people in the world.

For those of you who haven’t read Zen and the Art of Motorcycle Maintenance by Pirsig, he spends at least one chapter at the beginning talking about how we naturally tend to divide things into smaller pieces in an effort to understand them. The novice looks at a motorcycle and sees the visible things, like a seat, handlebars, and wheels, but the expert sees a fuel system, a cooling system, and the suspension. The same thing or system (motorcycle) can be subdivided different ways depending on what we want to do with it.

My tongue-in-cheek title of this post is an acknowledgement of the many ways we can categorize something like Automation Software, but for my purposes today, I’m making two categories: hammers and levels.

A carpenter carries both a hammer and a level, but the two have fundamentally different failure modes. If a hammer stops working, you’ll know it as soon as you try to use it. As long as it hammers in a nail, it doesn’t matter if the hammer is rusty, dirty, scratched or dented; it’s still a working hammer. The level, on the other hand, is a measuring instrument. As novices, we assume it comes from the factory pre-calibrated, and we happily hang our shelf or picture without testing it, but a professional carpenter knows they have to check their levels for accuracy, or else the level is useless. You could use a level for years, and if one day it stopped being accurate, you probably wouldn’t know. This is a very different situation from the hammer.

Software in general, and automation software in particular, offers similar examples. You never need to “calibrate” the Axis 1 Advanced proximity switch on a machine, because if it doesn’t work, the machine won’t make parts (and you’ll know about it instantly, usually via a 2 am phone call). On the other hand, testing data collection logic is surprisingly difficult, because the only way to test it is to compare it with a known-good equivalent. Assuming you created this data collection logic to automate away a manual process, the only measuring stick you can check it against is the manual process you’re replacing. Once the system is bought off and you get rid of the paper system, how do you prove that subsequent changes don’t break the data collection system?

It’s tempting to brush off the problem by saying that anyone who makes a subsequent change has to do a full regression test of the system, including the data collection system, but anyone who has worked in a real factory environment knows that this is unlikely to work in practice. Full regression tests are expensive.

In the wider software world, they use automated unit tests: they take the logic being tested and run it through a series of automated checks to make sure nothing changes. This works well in an environment like PC programming, but it’s very difficult in practice for PLC programming because (a) you usually need a physical PLC to execute the logic (unless you have some kind of emulator), and (b) the people maintaining the system are probably not familiar with concepts like unit tests, and will tend to undervalue their importance.

This screams for a system-level solution. Take accounting, for instance. Double-entry accounting (the use of debits and credits, which forces every transaction to be recorded twice) was deliberately designed to catch manual entry errors. If your debits and credits don’t balance, you know you’ve made a mistake somewhere, and you go back and check your arithmetic.

In the automation world, the solution is to measure every input to the data collection system two ways, analyze and aggregate each separately, and compare the end results. Create a system warning or fault if the results don’t match. For instance, measure the amount of material going into the machine, and measure the amount of material exiting the machine, both as finished product and as scrap. If the input doesn’t match the sum of the outputs over the same time period, you know you have a problem. The system becomes self-checking (a hammer rather than a level).
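To make that concrete, here’s a minimal sketch of the comparison logic in IEC 61131-3 Structured Text. The tag names and the 2% tolerance are my own illustration, not from any particular system; the point is that the two totals come from independent sensing methods and only meet at the final compare:

    VAR
        InfeedKg    : REAL; (* totalized from line speed x run time *)
        GoodKg      : REAL; (* totalized from part count x part weight *)
        ScrapKg     : REAL; (* totalized from scrap count x scrap weight *)
        Imbalance   : REAL;
        BalanceWarn : BOOL; (* raise this as a system warning on the HMI *)
    END_VAR

    (* Compare the two independent "books" over the same time period *)
    Imbalance := ABS(InfeedKg - (GoodKg + ScrapKg));

    (* Warn if the two measurements disagree by more than 2% of the infeed *)
    BalanceWarn := Imbalance > 0.02 * InfeedKg;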

If you follow this route, you need to take care to avoid some common traps:

  • Don’t re-use logic between the two sides (in fact, try to make them work differently)
  • Try to use different sensors or sensing methods (can we measure the input by speed and duration, and the output by parts and scrap weight?)
  • Record both, so if there is a discrepancy, you can check them against manual measurements

It sounds like more work, but making the system self-checking actually reduces the amount of testing you have to do, so it’s not as bad as you’d think. Besides, writing code is a lot more fun than testing it. We automate everyone else’s job; why not the boring parts of ours?

Sequential Function Charts in RSLogix 5000

I recently wrote part 11 of the RSLogix 5000 tutorial I’ve been working on, which covers the Sequential Function Chart editor.

RSLogix 5000 - Sequential Function Chart (SFC) Editor

I know there’s a lot of resistance to straying from the established practice of using ladder logic everywhere, but in this part of the tutorial I present a really simple way to use sequential function charts to express auto mode sequences in your RSLogix 5000 program. It’s very readable, saves a lot of ladder programming, and integrates very well with your lower-level machine control routines. Even if you’re an experienced RSLogix 5000 programmer, you’ll find this SFC introduction worth the read. Check it out.

Reading the ControlLogix System Time in Ladder Logic

Since I’m in the tutorial mind-set right now, I thought I’d mention this little gem. Here’s how you can read the ControlLogix (or CompactLogix) PLC system time into a UDT so you can use the current time value in your ladder logic program.

First I created a UDT called “TIME”:

RSLogix 5000 - Read System Time - UDT

Then you just need to use the GSV (Get System Value) instruction with the WallClockTime class and the LocalSystemTime attribute to read the controller’s time into an instance of your UDT (here I created a new tag called LocalDateTime, of type TIME). Note that I used the Year element of the LocalDateTime tag as the destination parameter, because that’s the first address of the tag; the instruction starts writing there and fills in the entire UDT with the time values:

RSLogix 5000 - Read System Time in Ladder Logic
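In case the screenshots don’t come through, here’s roughly what the UDT looks like written out as text. This is a sketch only: in RSLogix 5000 you build the UDT in the data type editor, and the member names follow my screenshot, but the GSV does fill seven DINTs in this order, starting at Year:

    (* TIME UDT members, in the order the GSV writes them, starting at .Year.
       Note: "TIME" is a legal UDT name in RSLogix 5000; in strict IEC
       text syntax it would collide with the built-in TIME type. *)
    TYPE TIME :
    STRUCT
        Year        : DINT; (* e.g. 2010 *)
        Month       : DINT; (* 1 to 12 *)
        Day         : DINT; (* 1 to 31 *)
        Hour        : DINT; (* 0 to 23 *)
        Minute      : DINT; (* 0 to 59 *)
        Second      : DINT; (* 0 to 59 *)
        Microsecond : DINT;
    END_STRUCT
    END_TYPE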

Now you can program your sprinklers to turn on and off in the middle of the night! 🙂


The Five Rung Logic Block

I’m still working on my RSLogix 5000 Tutorial, and I decided to include a brief introduction to a Five Rung Logic Block. If you’re like me, the first time you saw or heard about a five rung, you thought it was a ludicrous amount of logic just to move a cylinder from one position to another. However, I realized most of my non-five-rung motion control logic always ended up being as complicated as, or more complicated than, if I’d used a Five Rung from the beginning, so now I’m a convert.

A five rung block is named after the five coils that make up each block (these names differ between programmers and organizations, but they follow this pattern):

  • Safety
  • Precondition (aka Trigger)
  • Command
  • Complete (aka “In Position”)
  • Fault

Briefly, your safety rung contains all the conditions that must be present during the entire motion. You put conditions like “no critical faults” here, as well as conditions such as “interfering axes are clear.” For instance, if axis A and axis B should never advance at the same time, then put A Retracted in the safety rung of the five rung that moves axis B to the Advanced position, and vice-versa. This is your “sanity check” logic.

The “trigger”, or “precondition” rung is the signal that will initiate automatic motion. You can put auto conditions directly in this rung, or you can tie in auto mode sequence logic from elsewhere. The difference between the Safety rung and the Trigger rung is that the Safety rung has to stay on during the entire motion, but the Trigger rung only has to be true to initiate the motion.

The Command rung is usually a sealed in coil that turns on while the machine should be trying to move this axis to this particular position. Sometimes this is synonymous with an output itself, but in many cases the Command coil will then be used to drive one or more outputs or motion instructions.

The In Position (sometimes called “complete”) rung condition is the logic that interprets the sensors to tell us that we’re actually in the given position. It’s handy to use this coil throughout your program to indicate the state of the machine, rather than looking at the inputs directly. That allows you to invert input logic or add debounce logic later in just one place, rather than everywhere that depends on this machine state.

Finally, the Fault condition is normally a timeout fault. If the command coil is on too long without seeing the In Position signal, then we determine that the motion timed out, and we stop trying. We use this fault in our alarm logic as well.

Once the command condition becomes true, it’s normally sealed in until one of three things happens: the safety condition changes to false, the motion completes and the In Position signal changes to true (normal), or the motion times out and the Fault condition changes to true.
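If it helps to see the shape of the pattern outside the ladder editor, here’s the same skeleton sketched in IEC 61131-3 Structured Text. All the condition names and the 5-second timeout are illustrative placeholders, not part of the tutorial:

    VAR
        Safety, Trigger, Command, InPosition, Fault : BOOL;
        PB : BOOL; (* HMI manual-mode pushbutton for this motion *)
        NoCriticalFaults, AxisA_Retracted : BOOL; (* placeholder conditions *)
        AutoMode, ManualMode, StepCallsForMotion, FaultReset : BOOL;
        AdvancedProx, RetractedProx : BOOL; (* position sensors *)
        CmdTimer : TON; (* timeout timer *)
    END_VAR

    (* Rung 1: Safety - must stay TRUE for the entire motion *)
    Safety := NoCriticalFaults AND AxisA_Retracted;

    (* Rung 2: Trigger - only has to be TRUE to initiate the motion *)
    Trigger := AutoMode AND StepCallsForMotion;

    (* Rung 3: Command - seals in until InPosition, Fault, or loss of Safety *)
    Command := Safety AND NOT InPosition AND NOT Fault
               AND (Trigger OR (ManualMode AND PB) OR Command);

    (* Rung 4: In Position - interpret the sensors in one place *)
    InPosition := AdvancedProx AND NOT RetractedProx;

    (* Rung 5: Fault - Command on too long without reaching position;
       sealed in until reset *)
    CmdTimer(IN := Command, PT := T#5S);
    Fault := CmdTimer.Q OR (Fault AND NOT FaultReset);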

Here’s what a generic five rung logic block looks like. I left an AFI in the trigger rung because you normally come back and fill in that logic later. Note that this is only to move one axis to one position:

RSLogix 5000 Tutorial - Generic Five Rung Logic Block

The “PB” bit in the command rung is the HMI pushbutton that initiates motion to this position while you’re in manual mode. Note that both manual mode motion and auto mode motion have to follow the same safety conditions, but just have different trigger conditions.

You have to customize the block for each situation. The contents of the safety and trigger rungs are obviously unique, but the command rung also sometimes changes depending on the scenario. You may have more than just manual and automatic modes. Some automatic actions may actually happen when you’re not in automatic mode. Sometimes you may want an axis to “jog” (for instance, a hydraulic axis moves slowly, and you may want it to move in manual mode only while you hold your finger on the button), and in that case you couldn’t seal in the Command bit around the manual mode pushbutton (you’d just seal it in around the trigger contact).

There are other variations. I hope you find this useful. If nothing else, when you come across this logic in someone else’s program, you’ll have a better idea of how it works.

Industrial Automation Knowledge Sharing, Revisited

About 6 months ago I blogged about ControlsOverload, a knowledge-sharing Q&A site for the Industrial Automation community: Make More Money with ControlsOverload.

I was pleased to read Bill Lydon’s article in InTech called Putting knowledge to work. Bill shares a story similar to what many of us have experienced:

Early in my career, I ran up against an automation systems software problem I could not figure out. Being the “new guy” in an open office with five other engineers who had experience with these systems, I decided to get their input and explained the problem as best I could. They had suggestions, but no one offered a solution. After working on the problem for a few days, I discovered some software code that only executed under certain system circumstances, which was creating the problem. I changed the code to get the system in the field working properly and wrote an engineering change request.

I proudly shared the solution with the group. One of the most knowledgeable and experienced engineers exclaimed, “I solved that problem months ago!” Pulling out a folder from his desk file drawer, he announced his notes on the solution were, “right here.” I asked why he did not tell me this a few days before, and he responded, “You didn’t ask the right question about this specific code.” I learned this was normal operating procedures with him because he believed withholding knowledge created job security.

That’s exactly why ControlsOverload was created. I’ve seen the impact of withholding knowledge, and it’s painful. Engineers are entering the industrial automation career path all the time, and all of them have to re-learn the same lessons over and over, because the older generations aren’t taking the time to share their knowledge and mentor. It’s the opposite of the behavior that built our civilization. We need to pass these expensive lessons on to the next generation so they can spend more time building better things than we ever could.

One-on-one interaction and teaching is good, but social networking adds an economy of scale that allows knowledge sharing to explode. But somebody has to take the time to seed the field before we get our first harvest.

Beckhoff TwinCAT: a First Impression

Those of you who know me remember I’ve been itching to get my hands on a Beckhoff TwinCAT PLC industrial control system for a few years now, but I never found a project or customer willing to take on the risk of trying something new. Like a bunch of penguins on a cliff of ice, nobody wanted to be the first in the water.

I’ve recently had the opportunity to apply a TwinCAT PLC and HMI system to a relatively low-risk application, so I finally have something to share. Those of you who know me also know I don’t pull my punches when I talk about products, so here’s an honest review of the TwinCAT system: the good, the bad, and the ugly.

Benefits

Cost

What draws you to TwinCAT is its combination of low cost with a ton of features. I can’t publish prices, but I encourage you to compare quotes from Beckhoff, Allen-Bradley, Omron, and others. Beckhoff offers a similar or better feature-set, with as many or more communication options than its competitors, for significantly less money (it’s real-time PC control instead of dedicated hardware). On specs it can do everything, and it doesn’t bend you over the table when it comes to the price.

Scalability

First, Beckhoff TwinCAT PLC takes advantage of a PC CPU and memory, which means you can scale up the performance of your system for a fraction of the cost of scaling up to large PLC and PAC systems.

Second, if you use EtherCAT, you can scale up your I/O count with less impact on performance or price. The individual I/O slices are significantly cheaper than those for other bus technologies, and the servo drives are less expensive because the bus is fast enough to close the loop in the PLC rather than in the drive itself.

Where TwinCAT really shines is when you need to integrate it to the MES layer in your organization. Most other vendors require you to buy expensive OPC or other connection software, but Beckhoff provides a free DLL for Windows applications to communicate with the PLC over a communication protocol called ADS. (They also provide an OPC server as an add-on if you need it.)

What’s the Catch

Switching software vendors is never easy. If you’re used to your existing PLC software, like RSLogix, you’ve got a bit of a learning curve ahead of you. Here are the things that downright annoyed me about TwinCAT when I was coming from the RSLogix/PanelView realm:

  • When you do an online edit, it resets all of your forces. You have to reload your watch list to re-apply them. (Fortunately I always write perfect code the first time, so I never had to do an online change…)
  • The ladder logic editor doesn’t seem to auto-wrap (unless I’m missing a setting somewhere)
  • The HMI editor crashed a couple of times when I was modifying the HMI, and I had to restart the TwinCAT system (not the PC, just the software)
  • The English documentation is lacking
  • The TwinCAT System Manager’s DeviceNET master configuration had a bug where it wouldn’t recognize EDS files in which the Class value was stored as a 16-bit, rather than an 8-bit, value. The EDS file was valid according to EZ-EDS, but TwinCAT wouldn’t read it; changing the value to 8-bit made it recognize the parameter. This made it hard, but not impossible, to integrate an Allen-Bradley Point I/O DeviceNET node.
  • Even though the HMI will log an alarm history, there’s no alarm history control for you to view it in the HMI itself (it’s saved to a CSV file)

Here are some things that are just really different, but I’ve had no trouble adapting to:

  • You have to define all variables (tags) in a text syntax, instead of in a spreadsheet-like tag editor (see the sketch after this list)
  • I/O isn’t mapped directly. You define your hardware I/O and your program I/O and then you map them.
  • You don’t do online changes while you’re online with the PLC. You “log off”, then make your change, and then “log on” and tell it to upload your change. (You can do this while the machine is running.)
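For anyone who hasn’t seen the text syntax mentioned in the first point, here’s a rough sketch of what TwinCAT-style declarations look like (the names are illustrative). The %I* and %Q* locations mark program variables for later linking to hardware terminals in the System Manager, which is the mapping step described above:

    VAR_GLOBAL
        StartPB    AT %I* : BOOL; (* program input, linked to a terminal later *)
        MotorRun   AT %Q* : BOOL; (* program output, linked later *)
        CycleCount        : INT;  (* ordinary internal variable, no mapping *)
    END_VAR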

What’s Cool

Using a PC based controller opened up some interesting features, particularly in the HMI:

  • You can launch another application from a button click on the HMI (Excel, etc.)
  • You can configure some interesting alarm actions, like launching a program, or sending an email
  • There’s an (optional) add-in to let the PLC communicate in a limited fashion with a SQL Server
  • It runs on commodity hardware. PC hardware is cheap and you can get much larger touch screens for a lot less than the cost of an equivalent PanelView. The “fieldbus” card (EtherCAT) is under $100. Even a Beckhoff EtherCAT to DeviceNET master solution is cheaper than a straight-up DeviceNET card from another manufacturer.
  • TwinSAFE – a Safety Controller sitting right on your fieldbus for less than a stand-alone safety controller from another vendor.

Support

I had nothing but superb support from the local Beckhoff technical support guys during this experience. They respond to emails personally, usually within a couple of hours, and they have no problem making the two-hour drive to see us if we have any issues that can’t be resolved over the phone or by email. And we’re not a “big” customer in any way.

Summary

I’ve just spent about 2 weeks coming up to speed on TwinCAT, having come from an AB/Omron background, with a smattering of other PC based controllers like Phoenix Contact’s Visual Logic Controller (VLC) and MultiProg (formerly PCWorx).

The ease-of-use of TwinCAT doesn’t measure up to RSLogix, but it’s comparable to Omron, and it blows products like PCWorx/MultiProg out of the water (I think I was using version 3 of PCWorx, and I know there’s a version 5 now). Most of my issues with TwinCAT come down to its ladder editor just not being as polished as RSLogix’s, but that’s not surprising given the product’s German roots: North America tends to favor ladder logic, whereas Europe seems to favor instruction list, structured text, and function block diagram. It feels like the TwinCAT ladder logic editor was neglected a bit, but it is certainly workable. Comparing the TwinCAT and RSLogix sequential function chart editors, the two are actually reasonably close. I liked that I didn’t have to position individual steps in TwinCAT – it just auto-arranged everything for me.

Price-wise, TwinCAT is significantly less expensive than any other serious player in the industrial automation equipment space, particularly when you start ramping up the amount of I/O and drives you need, and it has similar (and in some cases better) features to offer. It’s even significantly less expensive than the Phoenix Contact solution (going from memory), and given that, there’s no reason I’d ever recommend a PCWorx/MultiProg solution from Phoenix Contact: the Phoenix offering is worse and more expensive.

If you’re in a position where your facility is in the market for a new automation platform, I definitely recommend including the Beckhoff TwinCAT platform in your deliberations. It hits a sweet spot of price, flexibility, and features that nobody else seems able to match right now. Now that we’re a few weeks in, we have a running machine and the project looks like it will be successful, so I have no reservations about saying: if you don’t mind learning something new, give them a try.

On Readability

Programming both PCs and PLCs sometimes gets me thinking about programming from a higher level. I’ve written a lengthy answer over on StackOverflow about the differences between PC and PLC programming. What I haven’t talked about before is how they are the same.

First, let me define what I mean by PC and PLC programming. By PC programming, I’m generally referring to imperative programming. There are two popular PC programming paradigms, imperative and declarative, and the paradigm with new-found popularity, functional, is actually a subset of declarative programming.

How PC and PLC programming are NOT the same

Most PLC programming falls into the declarative category, and most PC programming falls into the imperative category. For example:

PC:

  • Visual Basic, C/C++, C#: imperative
  • Lisp, F#: functional
  • HTML, XAML: declarative

PLC:

  • Ladder logic: declarative
  • Function block diagram: functional
  • Structured text: imperative

So normally when we talk about the differences between PC and PLC programming, we’re talking about the differences between imperative and declarative programming, but there’s obviously overlap on both sides of the fence.

The major difference, however, is audience. In North America at least, we write PC programs with the expectation that other programmers will have to read, understand, and change them, but we write PLC programs with the expectation that people in the maintenance department will go online with them and troubleshoot them. Just think about how odd that would be in the PC world: when a word processor crashes, nobody whips out their debugger, figures out what caused the crash, makes a fix, and continues writing their letter. Primarily this is because the source code doesn’t ship with the word processor, but it’s also because the programming language can only be understood by programmers.

How PC and PLC programming ARE the same

When you look at what makes a PC program good or bad, at a high level it’s the same thing that makes a PLC program good or bad: readability. Now, as I’ve pointed out, the people who have to read the program are different in each case, but readability really is the fundamental measure by which experienced programmers rate programs.

On the PC side, the name of the game with readability is modularity. You want to divide your program into parts, and you want to make those individual parts as self-contained as possible, minimizing the interaction between them. That makes it easier to reason about the program, because you’re abstracting away the underlying complexity of each piece and leaving a less complex interface to interact with. The entire domain of object-oriented programming is an extension of the concept of modularity.

On the PLC side, readability is equivalent to being able to troubleshoot the machine when it’s down. Experienced PLC programmers ask themselves, “if this machine stopped unexpectedly and I had to figure out why it stopped, what would I do? How can I make it easier for someone following that process to figure out what’s wrong with the machine?”

It turns out that most people troubleshooting a machine follow a similar procedure: you start at the outputs and you work your way backwards. You generally have a good idea what the machine is supposed to do next (e.g. move slide A to position B). You can look at the print set, or even the valve itself and figure out what output should be turning on. You look at the indicator on the output card and it’s not on, so the logic isn’t telling it to turn on. You crack open the laptop, and you find that output. You’re looking for one thing: the COIL.

Notice the one big mistake you could make if you’re writing a program: you could use a whole bunch of set and reset (or latch and unlatch) instructions to drive your outputs. Based on my description, you can easily see why that would make the program less readable: which set instruction is the one that’s supposed to be turning on the output right now? If there’s only one, it’s easy, but if there are 10, you’re already lost.
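Here’s the difference in miniature, sketched in Structured Text with illustrative names (the two styles are alternatives, shown together only for contrast):

    VAR
        AdvanceValve, Command, Safety, Step5, FaultReset : BOOL;
    END_VAR

    (* With a single coil, the complete condition for the output lives in
       one place; cross-referencing the tag takes you straight to it: *)
    AdvanceValve := Command AND Safety;

    (* With scattered sets and resets, "why is this on right now?" has as
       many answers as there are writes to the bit: *)
    IF Step5 THEN
        AdvanceValve := TRUE;   (* set #1 of 10, somewhere in the program *)
    END_IF;
    IF FaultReset THEN
        AdvanceValve := FALSE;  (* reset #1 of 10, somewhere else entirely *)
    END_IF;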

Let’s assume you do find the coil that drives this output. Your next step is to follow the logic back through the rungs, right clicking on the conditions that aren’t satisfied and cross referencing until you understand what the machine is waiting for. What are some obvious mistakes you can make that would hinder this process?

  • Using an integer (or sequencer – yuck!) to store your automatic process step number rather than using individual coils for each step
  • Using set/reset or latch/unlatch instructions more than once on each bit
  • Using really long tag names so readers have to scroll left/right or up/down more than necessary to read one rung
  • Calling subroutines more than once per scan so you can’t see the state of the logic in the subroutine (newer controllers have function blocks where you can drill down into individual instances, which is nice)
  • Using For Loops – same reason
  • Having logic that is conditionally scanned – particularly in controllers where it isn’t obvious if the logic you’re looking at is scanned or not
  • Mapping your inputs or outputs by block copying them to or from a user defined type, word or array (Don’t make the reader start counting bits! The line is down!)

Once you start thinking from the point of view of someone troubleshooting the machine, your perspective on good vs. bad programming really changes. You realize that techniques that seem to save you time while you’re programming end up costing the company hours of lost production time while maintenance picks their way through your cryptic logic.

Next time you’re writing your ladder logic, think of the poor maintenance guy who has to figure out what’s wrong, and try to make his life a little less miserable.