Author Archives: Scott Whitlock

About Scott Whitlock

I'm Scott Whitlock, an "automation enthusiast". By day I'm a PLC and .NET programmer at ETBO Tool & Die Inc., a manufacturer.

Finding Internet-Connected Industrial Automation Devices

I think most people in our industry realize you shouldn’t connect industrial automation devices to the internet, but just in case you happen to think otherwise, here’s a quick explanation why (this is old news, by the way).

You may believe that things connected to the internet are relatively anonymous. There’s no web page linking to them, so how is Google going to find them, right?

It turns out it’s relatively easy to find devices connected to the internet. It’s kind of like the old movie WarGames, where the lead character, played by Matthew Broderick, programmed his computer to dial every phone number in a specific block (555-0001, 555-0002, etc.) and record the numbers where a modem answered. That was called “war-dialing”. In the age of the internet, you just start connecting to common port numbers (web servers are on port 80, etc.) on one IP address at a time, logging what you find. This is called port-scanning.
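To make that concrete, here’s a minimal port-scanning sketch in C#. The address range (a reserved TEST-NET block), the port list, and the 200 ms timeout are all illustrative assumptions, not real targets:

// Minimal port-scanning sketch. The address range, port list, and
// timeout below are illustrative assumptions.
using System;
using System.Net.Sockets;

class PortScanSketch
{
    static void Main()
    {
        int[] commonPorts = { 21, 22, 80 }; // FTP, SSH, HTTP

        for (int host = 1; host <= 254; host++)
        {
            string ip = "192.0.2." + host; // TEST-NET range, for illustration
            foreach (int port in commonPorts)
            {
                using (var client = new TcpClient())
                {
                    try
                    {
                        // Try to connect, but give up after 200 ms.
                        if (client.ConnectAsync(ip, port).Wait(200))
                            Console.WriteLine("{0}:{1} answered", ip, port);
                    }
                    catch (AggregateException)
                    {
                        // Refused or unreachable: nothing listening here.
                    }
                }
            }
        }
    }
}

A service like SHODAN then records the “banner” each open port returns, which is what makes the results searchable.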

It turns out that you don’t even have to do this yourself. One free service called SHODAN does this for you, records everything it finds at the common port numbers (web servers, FTP servers, SSH daemons, etc.), and lets you search the results just like Google. It turns out that (a) most modern industrial equipment includes embedded web servers and/or FTP servers to allow remote maintenance, and (b) most web servers and FTP servers respond with some kind of unique “banner” when you connect to them, announcing who or what they are.

So, if you still aren’t convinced that you shouldn’t be putting industrial automation equipment on the internet, here’s a little experiment you can run:

  1. Take a ControlLogix with an ENBT card and hook it directly to the internet, so it has a real IP address.
  2. Wait a couple of days.
  3. See if your IP address shows up on this SHODAN search page.

You could try the same thing with a Modicon M340.

This query for Phoenix Contact devices is particularly scary, as one of the links is a wind turbine! I was a bit scared once I opened it (it opens a publicly accessible Java applet that’s updating all the data in real-time), so I closed it. There was no password or anything required to open the page. At least the button that says “PLC Config.” appeared to be grayed out. Let’s hope that means it’s protected by some kind of password… and that it’s hardened better than every single major corporation’s website was this year.

Just want to say thanks to DigitalBond for pointing out this SHODAN search for all Advantech/Broadwin WebAccess deployments around the world too.

Google’s Self-Driving Car

Google has been developing a self-driving car.

It’s abnormal for a company to go completely outside of its comfort zone when designing new products, so why would Google go into the automotive control system industry? They claim it’s to improve safety. They’ve also offered the pragmatic assertion that “people just want it”.

Google is a web company. It makes huge gobs of money from advertising, and advertising scales with web traffic. Google’s designing a self-driving car so that drivers can spend their commute time surfing the web on mobile devices. That’s the only explanation that makes sense.

Not that I mind, of course… I’d love to be able to read on the way to work. When can I buy my self-driving car? Can I get one that’s subsidized by ads?

Control System Security Dilemmas

It’s fascinating to watch what’s unfolding on the industrial control system security front these days. Digital Bond’s SCADA Security Portal is as entertaining as any (thanks to ArchestrAnaut for pointing it out to me).

A brief recap:

  • Stuxnet makes news even in the mainstream press
  • Siemens shrugs it off and does absolutely nothing about it
  • Security researchers, smelling smoke, start poking around PLC security and find it completely lacking
  • Details about wide-open backdoors in common PLC hardware have now been published online

Things are not moving in a positive direction either. Those security “researchers”, many of whom seem to be selling security solutions, are digging up ways to compromise PLCs and they’re posting all that information online. Now if this forces automation vendors to stop looking the other way and start taking security seriously, then I think it can only be a good move in the long term, but you have to admit it feels a little like a tire salesman throwing roofing nails on the road in front of his store.

All of this makes you wonder, what’s a small manufacturer to do? As always, businesses need to weigh the risks and the costs and act accordingly. This isn’t easy for the decision makers. On one side there’s enormous pressure to network all of the systems together to facilitate the fast flow of information between the ERP, MES, and Plant Floor layers, but on the other side, every interconnection increases the risk of catastrophic failure. I’ve personally seen Windows worms take down automation networks. In the next few years I’m certain we’re going to see worms that can jump from PLC to PLC and probably ones that can cross from Windows to PLC and back.

Properly segregating networks and then managing them is a big IT project. That means it needs scarce resources, and those resources aren’t making money for the company. Big manufacturers have enough cash flow (and have been bitten enough times) that they can allocate resources for this kind of project, but small manufacturers are a different story.

Small companies generally lack the specialists needed to implement such systems. Almost by definition, generalists serve in small companies and specialists gravitate towards large companies. Small companies can only implement commodity solutions (unless it’s part of their core business strength). That means that while we’re all worried about what might happen if a major utility or top tier manufacturer gets hit with an automation security breach, the fact is it’s more likely that small manufacturers will be the first ones hit by a fast-spreading generalized threat. The economic impact could be just as large… those small manufacturers are feeding parts up the supply chain, and in this just-in-time environment it doesn’t take much to cause a major interruption.

What’s the solution?

Short of the automation vendors waking up and making secure products, we need better (and less expensive) tools for securely connecting our PLCs. I hate to say it, but you can’t implement modern control systems without knowing the basics of network security, VLANs, and access control.

The Skyscraper, the Great Wall, and the Tree

One World Trade Center will be a 105-story super-skyscraper when completed. Its estimated total cost is $3.1 billion.

When it’s complete, how much would it cost to add a 106th story? Would it be 1/105th of $3.1 billion? No. The building was constructed from the ground up to be a certain height. The maximum height depends on the ground it’s sitting on, the foundation, the first floor, and so on. Each of those would need to be strengthened to add one more story.

It’s possible to engineer a building so that you can add more floors later, but that means over-engineering the foundation and the structure.

Contrast the building of a skyscraper with the building of the Great Wall in China. The Great Wall is infinitely scalable. It stretches for many thousands of kilometers, and each additional kilometer probably cost less than the one before because the builders became more skilled as they worked.

The difference between the skyscraper and the Great Wall is all about dependencies. In the case of the skyscraper, every story depends on everything below it. In the case of the Great Wall, each section of wall only depends on the ground underneath, and the sections of wall on either side. If one section is faulty, you can easily demolish it and rebuild it. If one section is in the wrong place, you only have to rebuild a couple of sections to go around the obstacle.

When you’re building the Great Wall, you can probably take an experienced stonemason and have him start laying stone on the first day, without much thought about the design. With a skyscraper, it’s the opposite. You spend years designing before you ever break ground.

When people first started to write software, they built it like the Great Wall. Just start writing code. The program gets longer and longer. Unfortunately, complex programs don’t have linear dependencies. Each part of the code tends to have dependencies on every other part. Out of this came two things: the Waterfall model, and Modular programming. While modular programming was a success at reducing dependencies, and eventually led to object-oriented (and message-oriented) programming, the Waterfall model was eventually supplanted by Agile methodologies.

The software developed by Agile methodologies is neither like the Great Wall (there are still many inter-dependencies) nor like the traditional skyscraper (because those many dependencies are loosely coupled).

Agile software grows organically, like a tree. You can cut off a limb that’s no longer useful, or grow in a new direction when you must.

Many programmers seem to think that Agile means you don’t have to design. This isn’t true. A tree still has an architecture. It’s just a more flexible architecture than a skyscraper. Instead of building each new layer on top of the layers beneath it, you build a scaffolding structure that supports each module independently.

The scaffolding is made up of the SOLID principles like the single responsibility principle, and dependency injection, but it also consists of external structures like automated unit tests. If Agile software is like a tree, you spend more time on the trunk than the branches and more time on the branches than the leaves. The advantage is that none of the leaves are directly connected, so you can move them. That’s what it means to be Agile.
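Here’s a tiny C# sketch of that scaffolding idea: the module depends on an abstraction injected from outside, so a unit test (or a replacement) can support it independently. The names are hypothetical, just for illustration:

// Dependency injection sketch: the module hangs from an abstraction
// instead of the concrete layers below it.
public interface INotifier
{
    void Notify(string message);
}

public class OrderProcessor
{
    private readonly INotifier _notifier;

    // The dependency is injected, not constructed internally.
    public OrderProcessor(INotifier notifier) { _notifier = notifier; }

    public void Process(string orderId)
    {
        _notifier.Notify("Processed order " + orderId);
    }
}

// A unit test can supply a fake, supporting this "leaf" on its own:
public class FakeNotifier : INotifier
{
    public string LastMessage;
    public void Notify(string message) { LastMessage = message; }
}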

What are the trade-offs? Agile enthusiasts seem to think that every other methodology is dead, but that’s not true. You can still go higher, faster, with a skyscraper, or get started sooner with the Great Wall.

Embedded control systems for automobiles are still built like a skyscraper. That’s because the requirements should be very well known, and future changes are unlikely. On the other hand, you’d be surprised how many businesses run significant portions of their operations on top of Excel spreadsheets. That’s typical Great Wall methodology. It works as long as complexity stays low.

Use Agile when you need complexity and flexibility.

Editing Logic is Business Logic

If you’re writing an “n-tier” application then you typically split your application into layers, like the “GUI layer”, “business logic layer”, and “data layer”. Even within those layers there are usually more divisions.

One of the advantages of this architecture is that you can swap out your outer layers. You should be able to switch from SQL Server to MySQL and all your changes should be confined to the data layer. You should be able to run a web interface in parallel with your desktop GUI just by writing another set of modules in the GUI layer.

This all sounds great in principle, so you start building your application and you run into problems. Here’s the typical scenario:

Business rule: one salesperson can be associated with one or more regions.

Business rule: each salesperson has exactly one “home region”, which must be one of their associated regions.

Now you go and build your entity model, and it probably looks like a SalesPerson class that inherits from the Employee class. There’s a many-to-many association between SalesPerson and Region. The relationship class (SalesPersonRegion) might have a boolean property called HomeRegion (there are other ways to model it, but that’s a typical and simple way).

Now, you can enforce these constraints on your entity model by making sure that you provide at least one Region in your SalesPerson constructor, and you make that the home region by default. You provide a SetHomeRegion method that throws an exception if you pass in a non-associated region, and you throw an exception if you try to dissociate the home region. If you do all this well, then your model is always consistent, and you can reason about your model knowing certain “invariant” conditions (as they’re called) are always true.
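Here’s a minimal sketch of what those constraints might look like in C#. The member names are hypothetical and the Employee base class is omitted for brevity; the point is just that the entity can never get into an inconsistent state:

// Entity model sketch: the invariants are enforced at every entry point.
using System;
using System.Collections.Generic;

public class Region
{
    public string Name { get; private set; }
    public Region(string name) { Name = name; }
}

public class SalesPerson
{
    private readonly List<Region> _regions = new List<Region>();
    public Region HomeRegion { get; private set; }

    // At least one region is required, and it becomes the home region.
    public SalesPerson(Region initialRegion)
    {
        if (initialRegion == null)
            throw new ArgumentNullException("initialRegion");
        _regions.Add(initialRegion);
        HomeRegion = initialRegion;
    }

    public void Associate(Region region)
    {
        if (!_regions.Contains(region)) _regions.Add(region);
    }

    public void SetHomeRegion(Region region)
    {
        if (!_regions.Contains(region))
            throw new InvalidOperationException("Home region must be an associated region.");
        HomeRegion = region;
    }

    public void Dissociate(Region region)
    {
        if (region == HomeRegion)
            throw new InvalidOperationException("Cannot dissociate the home region.");
        _regions.Remove(region);
    }
}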

All of that, by the way, lives in the Domain Model, which is part of your Business Logic Layer.

Now let’s build our GUI where we can edit a SalesPerson. Obviously the GUI stuff goes in the GUI layer. Depending on your architecture, you might split up your GUI layer into “View” and “ViewModel” layers (if you use WPF) or “View” and “Controller” layers (if you’re using ASP.NET MVC).

Now unless you really don’t care about your users, this edit screen is going to have a Save and a Cancel button. The user is typically free to manipulate data on the screen in ways that don’t necessarily make sense to our business rules, but that’s OK as long as we don’t let them save until they get it right. Plus, they can just cancel and throw away all their edits.

My first mistake when I ran into this situation was assuming that I could just use the entities as my “scratch-pad” objects, letting the user manipulate them. In fact, the powerful data binding features of WPF make you think you can get away with this. Everyone wants to just bind their WPF controls directly to their entity model. It doesn’t work. OK, let me be a little more specific: it works great for read-only screens, but it doesn’t work for editing screens.

To do this right, you need another model of the data during the editing process. This other model has slightly different business rules than the entity model you already wrote. For instance, in our example above about editing a SalesPerson, maybe you want to display a list of possible regions on the right, let the user drag associated regions into a list on the left, drag out regions to dissociate them, and perhaps each region on the left has a radio button next to it allowing you to select the home region. During the editing process it might be perfectly legitimate to have no associated regions, or no home region selected. Unfortunately, if you try to use your existing entity model to hold your data structure in the background, you can’t model this accurately.

There’s more than one way to solve this. Some programmers move the business rules out of the entity model and into a Validation service. Therefore your entity model allows you to build an invalid model, but you can’t commit those changes into the model unless it passes the validation. This approach has two problems: one, your entity model no longer enforces the “invariant” conditions that make it a little easier to work with (you have to test for validity every time someone hands you an entity), and two, certain data access layers, like NHibernate, automatically default to a mode where all entity changes are immediately candidates for flushing to the database. Using an entity you loaded from the database as a scratch pad for edits gets complicated. Technically you can disconnect it from the database, but then it gets confusing… is this entity real or a scratchpad?

Some other programmers move this logic into the ViewModel or Controller (both of which are in the GUI layer). This is where it gets ugly, for me. It’s true that the “edit model” is better separated from the “entity model”, but somewhere there has to be logic that maps from one to the other. When we load up our screen for editing a SalesPerson, we need to map the SalesPerson entity and relationships into the edit model, and when the user clicks Save, first we have to validate that the edit model can be successfully mapped into the entity model, and if it can’t then give the user a friendly hint about what needs to be fixed. If it can be mapped, then we do it. Do you see the problem? The sticky point is the Validation step. Validation logic is business logic. Most MVVM implementations I’ve seen essentially make the ViewModel the de-facto edit model, and they stuff the validation logic into the ViewModel as well. I think this is wrong. Business logic doesn’t belong in the GUI layer.

All of the following belongs in the business logic layer: the entity model, the edit model, the mapping logic from one to the other and back, and the validation logic.
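As a sketch (reusing the hypothetical SalesPerson and Region classes from above), the edit model, its validation, and the mapping back to the entity might look like this, all sitting in the business logic layer:

// Edit model sketch: while editing, "invalid" states are perfectly legal.
using System.Collections.Generic;

public class SalesPersonEditModel
{
    public List<Region> AssociatedRegions = new List<Region>();
    public Region HomeRegion; // may be null mid-edit

    // Validation is business logic, so it lives here,
    // not in the ViewModel or Controller.
    public IEnumerable<string> Validate()
    {
        if (AssociatedRegions.Count == 0)
            yield return "Associate at least one region.";
        if (HomeRegion == null)
            yield return "Select a home region.";
        else if (!AssociatedRegions.Contains(HomeRegion))
            yield return "The home region must be an associated region.";
    }

    // Mapping back to the entity only happens after validation passes.
    // (Dissociating removed regions is omitted for brevity.)
    public void ApplyTo(SalesPerson entity)
    {
        foreach (var region in AssociatedRegions)
            entity.Associate(region);
        entity.SetHomeRegion(HomeRegion);
    }
}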

If you follow this method, then WPF’s data binding starts to make sense again. You can map directly to the entity model if you have a read-only screen, and you can map to the edit model for your editing screens. (In my opinion, many situations call for both an “add model” and an “edit model” because the allowable actions when you’re adding something might be different than when you’re editing one that already exists… but that’s not a big deal.)

I apologize if this is all obvious to you, but I really wish someone would have explained this to me sooner. 🙂

Outdoor G-Scale Garden Railroad: Detecting the Trains

A little while ago I told you about an ongoing project to automate my parents’ G-Scale outdoor model railroad. Today I’m going to add a bit more detail about the solution: specifically, how do you sense the location of the trains?

Layout

The layout is broken into 3 lines: a “Figure 8” line, a “Point-to-Point” line, and a “Main Line”, which has various track switches, sidings, etc. The interesting thing about the Figure 8 and Point-to-Point lines is that they cross, and one of the goals is to prevent trains from colliding.

Some other goals include:

  • The point-to-point train should start at one station, move to the other station, stopping out of sight for a while along the way. It should stop at each end-station for an adjustable period of time, and return.
  • The Point-to-Point and Figure 8 lines have uphill and downhill sections, so the speed needs to be varied in these sections.
  • The Figure 8 line has two programmable stops, but the train shouldn’t necessarily stop at both stops every time.
  • We want to run multiple trains on the main line. Some will be stopped on a siding while others are running, and then they will switch.
  • All tracks need the ability to be manually operated.

The Figure 8 Line from behind "Stan's Speed Shop"

Power

All existing locomotives in the system use “track power” (DC voltage applied across the two rails of the track). The voltage applied to the track is applied to the motors in the locomotives, and this controls the speed.

There are some advantages to this: it allows you to run “stock” (unmodified) engines, and it’s compatible if someone wants to bring over a “guest” engine (either a track powered or battery powered model). It’s also compatible with “DCC” controlled locomotives which, as I understand it, are backwards compatible with track powered systems.

Control?

Whether you use a PLC or a PC for control, being able to control the voltage to a track (to control a motor speed) is pretty much a solved problem, so let’s assume we can do that for now. The main problem is location sensing. In order to tell the locomotives to stop, wait, start, etc., you need to know where they are.

Your first thought, as a controls engineer, is some kind of proximity sensor. Unfortunately there are some significant problems with this:

  • Metal-sensing proxes are expensive, and the locomotives are mostly plastic. We’re trying to avoid retrofitting all locomotives here. You might be able to sense the wheels.
  • Photo-sensitive (infrared) detectors, either retro-reflective or thru-beam type, are popular on indoor layouts, but they apparently don’t work well outdoors because sunlight floods them with infrared.
  • Reed switches are popular, but you need to fit all your engines with magnets, and they are a bit flaky. Magnets and reed switches actually work well if you have the magnet on the track, and the reed switch mounted to the engine, in order to trigger whistles, etc., but even then they’re not 100% reliable, in our experience.
  • All proximity detection strategies require you to run two wires to every sensor, which is a lot of extra wiring. Remember, there are lots of little critters running around these layouts, and they tend to gnaw on wires. Fewer wires is better.
  • Having sensors out in the layout itself means you’re exposing electrical equipment to an outdoor environment. At least you can take the locomotives in at night, but the sensors have to live out there year-long. I’m a bit concerned by that thought.

Solution: Block Occupancy Detection

The solution we found was a technique called “block occupancy detection”, which is a fairly common detection method in model railroading. A couple of years ago, I built a simple controller that solved the crossing detection problem between the Figure 8 and Point-to-Point lines using block detection to know where the trains were. It worked great, so we decided to use it for the entire system.

Here’s how it works: you divide up your track layout into “blocks”. Blocks can be any size, but they are typically anywhere from about 4 feet long, up to the length of a train, or a bit more. One rail on the line is “common” and isn’t broken up. The other rail is the one you cut into electrically isolated sections.

So, the wire from the common side of your speed controller goes directly to the common rail, as it did before. However, you have to split the “hot” side of your speed controller into as many circuits as you have blocks. Each block is fed from a separate circuit, which means you have to run a “home run” wire from each block back to your power supply.

Then, the “block occupancy detection” circuit is wired in series with each block circuit (between the speed controller and the block). Here’s what one block detection circuit looks like:

This is an interesting circuit. On the left you can see a bridge rectifier, with the + and – terminals curiously shorted out. This is a hacked use of the device. All we really care about is that we want to create a voltage drop across the device when current is flowing through the wire to that block. One diode creates a voltage drop of 0.6 to 0.7 V, and the way we’ve wired it, whether the speed controller is in forward or reverse, the current always has to take a path through two forward-biased diodes. That means, when current is flowing to the block (i.e. there’s an engine in that block) then we get a voltage drop of 1.2 to 1.4 V across this device (or -1.2 to -1.4 V if it’s in reverse). A standard bridge rectifier is just a handy component to use because it’s readily available in high current ratings for a couple of dollars each.

We’re using that constant voltage to drive the input side of an LTV-824 opto-isolator chip. Notice that it’s a bi-directional opto-isolator, so it works in forward or reverse too. On the output side of the opto-isolator, we can run that directly into a PLC input (the input we’re working with here is sourcing and has a pull-up resistor built-in).

If you’re using a regular straight-DC analog controller, that’s all you need, but in this case we’re using a pulse-width-modulated (PWM) speed controller. That means the PLC input is actually pulsing on and off many times a second, and if you’re at a slow speed (low duty cycle), the PLC may not pick up the signal during its I/O scan. For that reason, I found that sticking a 1 uF capacitor across the output will hold the PLC input voltage low long enough for it to be detected quite reliably. This, of course, depends on your pull-up resistor, so a little bigger capacitor might work better too.
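As a rough sanity check on that capacitor choice (the pull-up value below is an assumption; the actual input’s resistor isn’t specified):

// Back-of-envelope check on the hold capacitor, in C#.
using System;

class CapacitorCheck
{
    static void Main()
    {
        double pullUpOhms = 10000; // assumed built-in pull-up (not measured)
        double capFarads = 1e-6;   // the 1 uF capacitor across the output
        double tau = pullUpOhms * capFarads;

        // The input stays low for a few time constants after each PWM
        // pulse, so a scan time of a few milliseconds should catch it.
        Console.WriteLine("RC time constant: {0} ms", tau * 1000); // ~10 ms
    }
}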

Filtering in the PLC

This worked quite well, but the signal needed a bit of filtering in the PLC. The input isn’t always on 100% of the time while the locomotive is in the block, so once a block is latched as “occupied”, I wait until I haven’t seen the input on for 1 second before I decide that the block is clear.

I also have to see an adjacent block occupied before I clear a block. That solves the problem of “remembering” where an engine is when it stops on the track and there’s no longer any current flowing to that block.
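The real implementation is ladder logic, but expressed in C# the two filtering rules amount to something like this (the names and structure are my own):

// Sketch of the block occupancy filter; the actual implementation
// is PLC ladder logic.
public class BlockFilter
{
    public bool Occupied { get; private set; }
    private double _offSeconds; // time since the raw input was last on

    // Call once per scan with this block's raw opto-isolator input and
    // whether any adjacent block currently reads occupied.
    public void Scan(bool rawInput, bool adjacentOccupied, double scanSeconds)
    {
        if (rawInput)
        {
            Occupied = true; // latch on any pulse of current draw
            _offSeconds = 0;
        }
        else
        {
            _offSeconds += scanSeconds;

            // Clear only after 1 second with no input AND an adjacent
            // block occupied; otherwise a stopped (unpowered) train
            // would simply disappear from the map.
            if (_offSeconds >= 1.0 && adjacentOccupied)
                Occupied = false;
        }
    }
}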

Of course, this means you can end up with “ghosts” (occupied blocks that are no longer truly occupied because someone picked up a locomotive and physically moved it). I provided some “ghost-buster” screens where you can go in and manually clear occupied blocks in that case.

Pros and Cons

I like this solution for several reasons: all the electronics are at the control panel, not out in the field (except the wires to each block, and the track itself). Also, the components for one block detector are relatively inexpensive, and we’re working on a bit of a budget here (it is a hobby, after all). I think reliability and simplicity also fall into the Pro column. As long as you can get a locomotive to move on a segment of track (the track needs cleaning from time to time), then the PLC should be able to detect it. You don’t need to deal with dirty photo-detectors or extra sensor wires. The same wire that carries the current to the block carries the signal that the block is occupied.

On the other hand, there are some negatives. This system, as designed, has 21 different blocks, which means 21 home-run wires, buried in the ground, in addition to the commons (plus the track switch wires, but that’s a story for another post). Also on the negative side, you don’t get 100% accurate position sensing. Actually, you get pretty accurate sensing at the edges of the blocks (you’re pretty sure you know where the locomotive is the moment it crosses from one block to the next), but you’re not sure where it is in the middle of the block.

You do have to make other compromises in the track system. There are some accessories (like lighted end-stops) that draw power from the track. This current draw makes that particular block show up as occupied all the time. You either have to modify the accessory to use battery power, or you have to run extra wires to that accessory.

You also have to take the length of the train into account. You know which blocks are occupied by current-drawing locomotives and cars (like lighted observation cars and cabooses), but not every car draws power. Your design and control system needs to take into account whether or not your train will occupy more than one block at once, and whether the end of the train will be detected. This is most important when trying to run multiple trains on one track, where you want the back train to avoid running into the end of the first train.

Next

I hope that’s been educational. 🙂 I’m still not done programming the PLC, and I’m waiting for a component to arrive for the throttle controller right now. I’ll post more information over the next few weeks.

Sometimes it’s Better to Repeat Yourself

In programming, we have a principle called Don’t Repeat Yourself (DRY). It’s a very important idea, and I’d argue that most of the advances in programming environments over the years have been in support of this principle and its related principle, Once and Only Once (OAOO).

Unfortunately, like every “principle”, it eventually takes on the level of dogma, and the people spouting it sometimes forget why it exists. These principles aren’t ends in themselves; they’re not self-justified. They are general principles to follow, but only when they support the end-goal of solving problems in more efficient, and more maintainable ways.

Let me give you a very simplified example of how it can be carried too far. Consider the following declarations in C#:

const int MOTOR_1_START_TIMEOUT_MS = 5000;
const int MOTOR_2_START_TIMEOUT_MS = 5000;

Consider that I could write:

const int MOTOR_1_START_TIMEOUT_MS = 5000;
const int MOTOR_2_START_TIMEOUT_MS = MOTOR_1_START_TIMEOUT_MS;

or…

const int MASTER_MOTOR_TIMEOUT_MS = 5000;
const int MOTOR_1_START_TIMEOUT_MS = MASTER_MOTOR_TIMEOUT_MS;
const int MOTOR_2_START_TIMEOUT_MS = MASTER_MOTOR_TIMEOUT_MS;

Notice that all 3 versions accomplish the same end-result, but they are semantically different. The first version means that the two motors have independent timeout values, and they’re just co-incidentally the same. The second says, “motor 2’s timeout must be the same as motor 1’s timeout.” The third says that both motors must have the same timeout.

In my opinion, any of these three versions might be correct for various systems involving two motors. However, someone following the DRY principle without thinking about it will assert that the first version is incorrect. In fact, they’d probably say the only correct version should be:

const int MOTOR_TIMEOUT_MS = 5000;

(…ignoring, for the moment, that it should probably be a configurable value rather than a constant.)

Why does this simple example matter? Consider the case of a PLC-based control system with 10 motors. Let’s say at the start that all the motors, and all the drives running them, are identical. If you’re familiar with my philosophy of PLC programming, you know that my default solution for this would be to have 10 ladder logic routines, each called MOTOR_01, MOTOR_02, etc. Each routine would basically be a copy. That really doesn’t follow the DRY principle, does it? Certainly not, at least not at face value.

You might not believe it, but I get the occasional “hate mail” to my blog’s email address because of some of my technical opinions here. The most recent one, comically, referred to me (and all PLC programmers for that matter) as “dinosaurs”. I’m not sure what the rest of the message said, because if you can’t be polite, I’m not going to bother listening to you. However, I believe it’s this flagrant violation of things like the DRY principle that really rubs traditional PC programmers the wrong way when you start to talk about the principles of PLC programming.

Of course, my views about PLC programming are just that – general principles that need to be evaluated in the light of each and every project. I’m just asserting that most of the time you should be following a principle of a one-to-one mapping between ladder logic and real-world hardware. That doesn’t mean it’s an unbreakable rule.

Going back to the 10 motor example, the way you structure your program should be based on a decision you make about anticipated future changes to the system.

If you write one generic routine for controlling a motor, and you call it 10 times, you’re saying, “I always expect all 10 of these motors to behave in an identical way for all of the future.” Of course, you can allow variations, but you have to do that by passing in parameters for each instance. You have to be explicit about what can vary. Adding new parameters is typically a harder task than just modifying one of the 10 existing motor routines when you need to change the behavior of one motor.

On the other hand, if you follow my principle of 10 motor routines for 10 motors, you’re saying, “I expect that we’ll rarely need to make a sweeping change to all 10 motor control routines, but that we are likely to modify one or two routines to make them perform differently than the others.” I personally believe this is usually closer to the truth. As a system ages, perhaps one motor drive will blow, and you can’t buy the original drive anymore, so you have to replace it with a new one that has different control signals. That’s a fairly typical scenario, in my experience. Also, even though you might have 10 identical drives and motors, the process may or may not be identical for each motor. They may perform vastly different functions, and it’s likely that you’ll want to change just one or two of them to access more advanced features of the drive when you refine the process. Of course, I also like that with a one-to-one mapping in a PLC, troubleshooting becomes much easier because with online monitoring you can see each control routine executing just for that motor. You can make temporary changes just to one motor routine to bypass a faulted drive, or to do a million other changes that you’ll never be able to predict when you’re writing the logic.
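To make the trade-off concrete in C# terms (the real routines are ladder logic, and these names are hypothetical):

// Approach 1: one generic routine called 10 times. Anything that may
// differ between motors must become an explicit parameter up front.
public class GenericMotorRoutine
{
    private readonly int _startTimeoutMs;

    public GenericMotorRoutine(int startTimeoutMs)
    {
        _startTimeoutMs = startTimeoutMs;
    }

    public void Scan()
    {
        // Shared control logic, identical for every motor instance.
    }
}

// Approach 2: ten copies, one per physical drive. Changing how motor 7
// behaves means editing Motor07Routine and nothing else, and online
// monitoring shows exactly this motor's logic executing.
public class Motor07Routine
{
    public void Scan()
    {
        // This motor's own copy of the control logic.
    }
}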

The fact is, we’re physically limited by the number of drives we have. The amount of time it takes to make a change to all 10 motor control routines is tiny compared to how long it takes to make physical changes to 10 drives. This effort scales with the size of the system. In PC programming, you can have a system with millions, even billions, of objects, but in the PLC world, you’re limited by physical reality. The consequences of repeating yourself aren’t always as great, and you need to take that into account, and weigh it against your other goals.

That doesn’t mean I can’t imagine a case where you really want to assert that the motors all have to operate identically, all of the time, forever in the future. There are systems with load sharing drives where the system wouldn’t operate if you mismatched the drives or motors. That’s a design decision you have to make. Principles are only there for guidance, but they are not absolute rules, and they shouldn’t be treated that way.

Why you should be against Online Voting

So Canada wants to implement online voting. In case you didn’t already know why, here’s why you should be against it.

Vote Selling

If you can cast your vote online from any computer, then you can do it with someone looking over your shoulder. That means you can sell your vote. That means employers can favour employees who actually voted a certain way. One of the best features of our current paper and pencil method is that you can’t sell your vote.

Realistically you *could* sell your vote right now using mail-in cards, but I’m against mail-in votes too, for this reason. At least in that case, you know most people don’t do it.

Easy to Manipulate

Let’s assume for a moment that the servers that Elections Canada sets up don’t have any security flaws (unbelievable). At any rate, you still can’t trust the election results because a lot of people’s home computers are compromised by botnets. That means there’s malicious code running on millions of computers, and in most cases those computers are available for “rent” to the highest bidder. Once you’ve rented access to those computers, you can run any program you like.

Now, do you think a secure internet connection (using HTTPS) is really secure? In most cases the connection over the internet is secure (stops eavesdroppers), but if someone has access to your computer at home, they’re past the security. If they can run an arbitrary program on your computer, they can manipulate pretty much anything.

For example, let’s say you wanted to make clicks for one candidate actually get counted for another. You can do that. It’s called ClickJacking.

That’s just one example. If you have access to the computer, you can recalibrate the mouse (or touchscreen on newer computers). You can capture, log, and report on the user’s keystrokes.

Analogy to Online Banking

People try to counter this argument with analogies to the security of online banking, but that’s flawed. People’s bank accounts do get hijacked using methods like these all the time. The bank account gets cleaned out, and usually the bank refunds the money to the consumer and the loss comes out of their profits. As long as fraud isn’t too high, they can tolerate this. In an online election, you wouldn’t know if your vote had been hijacked. We would just end up with a fraudulent election.

Bottom line

Don’t support online voting, and make sure to explain to everyone else why they shouldn’t support it either. The fact that “the head of the agency in charge of federal elections” thinks this is a good idea means Marc Mayrand obviously doesn’t understand the serious problems inherent in online voting.

Edit: Further reading.

Sneak Peek: Outdoor Model Railroad Automation

Several years ago my parents decided they were going to build a Garden Railroad in their back yard. It’s been an ongoing hobby project since then, and it’s been growing substantially every year:

West Station on the Point to Point Line

This is “G Scale” (around 1:22.5 scale) outdoor model railroading. It’s really a combination of three things: model railroading, building miniatures, and gardening. This past weekend was the local club’s open-house day, and I was invited along to see many of the layouts. Each layout has its own emphasis: some focus more on the gardening, others on the miniatures, etc.

Anyone who does anything remotely related to computers knows that every relative you have thinks you know everything there is to know about computers, and you’re destined to spend the rest of your family holidays removing spyware and running ccleaner on their computers, not to mention reassuring them that it’s OK to reboot the computer to see if the problem goes away.

Being in industrial automation, though, you never get people asking you to automate something; it’s just a little bit too abstract for most people to grasp. However, when your parents get themselves a model train set, they may not know exactly what you do for a living, but they certainly know that if you can program conveyors, robots, and cranes, you should be able to figure out how to make their trains do what they want them to do. Automatically.

Of course, as a control systems guy, you can’t look at your parents’ 24V model train set and not think about how you’d hook up a PLC to it. Especially when they offer to finance the project.

I’m happy to report that we’re progressing well. The goal is to have it running in fully automatic mode before the end of the month. I’ll post some pictures, hopefully some videos, and some technical information about how it was accomplished. Stay tuned.

I Didn’t Learn This in School

Last year I went to my 10 year university reunion. The further I get from graduation, the more I have to discount the value of what I learned there.

Don’t get me wrong, a solid base in the fundamentals of electronics and some algorithm & data structure knowledge has gotten me out of some tight jams. However, a few days ago my father looked at something I was working on and said, “I guess you did learn something in school!” It was one of those “all-Greek-to-me” comments, but my own reaction was, “I didn’t learn any of this in school.” That startled me a bit. I’ve always argued the opposite, but there it was staring me in the face: most of my job consists of applying knowledge I learned after school.

When I started thinking about why what I do now didn’t relate to what I learned in school, I realized it’s because the choices you make in real life have longer term consequences.

In university, all of your projects are of very short duration. A term is only 4 months. You have to be able to start a project and complete it in that timeframe. However, at the end of that 4 month project, you throw out the result and start fresh on the next batch of projects. This is fundamentally different than the real world. Every day I deal with the consequences of decisions that I or someone else made years ago.

Over the course of your career you gain experience. As an Engineer or programmer, you learn to generalize. You learn to avoid commitment because you realize how much customers, bosses, and everyone else love to change their minds. Unfortunately you can’t hold off forever. You have to make choices, and I’ve realized a lot of the choices I make are based on my gut feel about what’s likely to change in the future and what isn’t.

For instance, if you need to add a fault timer for a motion, does that go near the logic that controls the motion, or in the fault logic routine? You want to keep things that are likely to change at the same time together. Is it more likely for someone to change the fault timer at the same time that they change other fault logic, or is it more likely that they change the fault timer when they modify the motion logic (personally I think it’s the latter, but it’s not cut-and-dried)?

Another example (acknowledgement to Reg Braithwaite): imagine you’re designing a Monopoly computer game and you’ve chosen to use an Object Oriented design. Traditional OO would suggest that you have a Property class, with a subclass for each concrete property (Baltic, Pennsylvania…), and that each concrete Property has a PurchasePrice value. But does it make sense that the definitions of the prices are distributed among all the different concrete properties? Isn’t it more likely that a rule change, or an alternate set of rules, would affect all property prices at once? So property prices should be defined in some other class. Unfortunately, if you move all the property-related rules to their own classes, then each class has to know about the list of properties. What happens when you want to provide a regional variant of the game with different property names, different currencies, or even a larger board with more properties? Then you have to update all kinds of places because so much depends on the list of properties.
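As a sketch of those two choices (hypothetical C# names; the prices are the classic US board values):

using System.Collections.Generic;

// Choice 1: traditional OO, where each concrete property owns its price.
public abstract class Property
{
    public abstract int PurchasePrice { get; }
}

public class Baltic : Property
{
    public override int PurchasePrice { get { return 60; } }
}

// Choice 2: prices pulled out into one place, so an alternate rule set
// can change them all at once... but now this class knows every property.
public class PriceSchedule
{
    private readonly Dictionary<string, int> _prices =
        new Dictionary<string, int>
        {
            { "Baltic", 60 },
            { "Pennsylvania", 320 },
            // ...and so on for the rest of the board
        };

    public int PriceOf(string propertyName)
    {
        return _prices[propertyName];
    }
}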

That’s when you have to ask, “what’s more likely to change?” You’re about to make a decision that’s going to pay-off or cost you in the future.

In school you never face this dilemma. You never have to choose the lesser-of-many-suboptimal-choices and live with the consequences of that choice. In the real world you face it every day. The consequences guide the choices you make next time, and so on. Every novice looks at a PLC program and thinks it’s too complicated. Every experienced PLC programmer tries to follow common practices, templates, and guidelines they’ve learned throughout their career because they’ve learned from the consequences of not doing that.

If we could adjust the education system just a bit, maybe students need to have a project that spans multiple terms, and even multiple years. Every term should build on the work you did last time. Every student in a class is given different objectives to achieve every term, and those objectives are assigned randomly. By the end of 4 years, they’ll learn how their choices in first year affected their ability to complete their objectives in fourth year. Then, I think, they’ll be a little more prepared for a career.