Author Archives: Scott Whitlock

About Scott Whitlock

I'm Scott Whitlock, an "automation enthusiast". By day I'm a PLC and .NET programmer at ETBO Tool & Die Inc., a manufacturer.

What can I do about our global resource problems?

On Saturday I posted Hacking the Free Market. You may have noticed the “deep-thoughts” tag I attached… that’s just a catch-all tag I use to warn readers that I’m headed off-topic in some kind of meandering way. In this post, I want to follow up on that one, but bring the discussion back to the topic of automation and engineering.

To summarize my previous post:

  • We haven’t solved the problem of how to manage global resources like the atmosphere and the oceans
  • The market isn’t factoring the risk of future problems into the cost of the products we derive from these resources
  • I can’t think of a solution

Just to put some background around it, I wrote that post after reading an article titled “Engineers: It’s Time to Work Together and Save the World” by Joshua M. Pearce, PhD, in the March/April 2011 issue of Engineering Dimensions. In the article, Dr. Pearce asks all of Ontario’s engineers to give up one to four hours of their spare time every week to tackle the problem of climate change in small ways. His example is to retrofit the pop machines in your office with microcontrollers hooked to motion sensors that turn off the lights and the compressor when nobody is around. He offers spreadsheets on his website that let you calculate whether there’s a payback.
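
To give you an idea of the kind of math in those spreadsheets, here’s a minimal sketch of a simple payback calculation in C#. The figures and names are placeholders I made up, not Dr. Pearce’s numbers:

// Rough simple-payback calculation for a vending machine retrofit.
// All of the figures below are made-up placeholders, not Dr. Pearce's numbers.
using System;

class VendingMachineRetrofit
{
    static void Main()
    {
        double retrofitCost = 150.00;     // parts + labour, in dollars
        double kwhSavedPerYear = 1200.0;  // estimated energy saved per year, in kWh
        double electricityRate = 0.10;    // dollars per kWh

        double annualSavings = kwhSavedPerYear * electricityRate;  // $120 per year
        double paybackYears = retrofitCost / annualSavings;        // 1.25 years

        Console.WriteLine("Annual savings: ${0:F2}", annualSavings);
        Console.WriteLine("Simple payback: {0:F1} years", paybackYears);
    }
}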

Now I’m not an economist, but I’m pretty sure that not only would this not help the problem of dwindling resources, but unless we also start factoring the future cost of fossil fuel usage into the price of the energy, the actions Dr. Pearce is suggesting will actually make the situation worse. When we use technology to utilize a resource more efficiently, we increase the demand for that resource. This rebound effect is a well-established economic principle (the Jevons paradox).

I’m not saying it isn’t a good thing to do. It’s a great thing to do for the economy. Finding more efficient ways to utilize resources is what drives the expansion of the economy, but it also drives more demand for those resources. Mass marketing energy conservation as the solution to our resource problems is a blatant lie, and yet it’s a lie I hear more and more.

That’s where I was at when I wrote the previous article. “How can I, a Professional Engineer and Automation Enthusiast, do something that can make a real difference for our global resource problems?”

I’m afraid the answer I’ve come up with is, “nothing”. My entire job, my entire career, and all of my training… indeed my entire psychology… is driven towards optimizing systems. I make machines that transform raw materials into finished products, and I make them faster, more efficient, and more powerful. I don’t know who is going to solve our global resource problems, but I don’t think it’s going to be someone in my line of work. It’s like asking a fish to climb Mt. Everest.

I think the solution lies somewhere in the hands of politicians, lawyers, and voters. We do a so-so job of managing resources on a national scale, but we’d better extend that knowledge to a global scale, and do it quick. There might even be technical solutions that will help, but I think these will come from the fields of biotechnology, nanotechnology, and material science, not automation.

In the meantime, I’m going to continue blogging, contributing to online Q&A sites, writing tutorials, and writing open source software and releasing it for free, because I believe these activities contribute a net positive value to the world. If you’re an automation enthusiast or engineer reading this, I urge you to consider doing something similar. It is rewarding.

Hacking the Free Market

Like any system, the free market works very well within certain bounds, but it breaks down when you try to use it outside of those constraints.

The free market works when all of the parties in the system are “intelligent agents”. For the purposes of definition, we’ll call them “adult humans”. An adult human is free to enter into transactions within the system with other adult humans. The system works because the only transactions that are permitted are ones in which both parties to the transaction benefit. So, a plumber can fix an electrician’s drain, and an electrician can fix the plumber’s wiring, and they both benefit from the transaction. The introduction of currency makes this work even better.

In fact, if one party in a transaction ends up with less, we usually make that transaction illegal. It usually falls into the category of coercion, extortion, or theft. Nobody can put a gun to another person’s head and demand money “in exchange for their life” because that person had their life before the transaction started.

Still, we find ways to hack the system. Debt is one obvious circumvention. Normally debt is a transaction involving you, someone else, and your future selves. If you take out a student loan, go to school, pay back the student loan, and get a better job, then everyone benefits, and it’s an example of “good debt”. Likewise, if you need a car loan to get a car to get a better job, it’s “good debt” (depending on how much you splurged on the car). Mortgages are similarly structured. Consumer debt (aka “bad debt”), on the other hand, is typically a circumvention of the free market. However, the person who gets screwed is your future self, so it’s morally ambiguous at worst.

The free market can also be hacked through the exploitation of common resources. For instance, if I started a business that sucked all the oxygen from the air, liquefied it, and then I sold it back to the general public for their personal use, I doubt I’d be in business very long. I might as well be building a giant laser on the moon. Similarly, the last few decades were filled with stories of large companies being sued in class action lawsuits for dumping toxic chemicals into streams, rivers or lakes and poisoning the local water supply. Using up a common resource for your own benefit is a kind of “free market hack”. If a third party can prove they were the targets of a “free market hack”, the courts have ruled that they are entitled to compensation.

Still, we hack the free market all the time. The latest credit scandal is just one example. The general public was left out of pocket by transactions many of them weren’t a party to. It really is criminal.

A larger concern is the management of natural resources. This includes everything from fossil fuels, to timber, fresh water, fish stocks, and, most recently, the atmosphere. The latter opens a new set of problems. All of the other resources are (or can be) nationally managed. Canada, for instance, while it has allowed over-fishing to take place on the Grand Banks, has put systems in place that reduce the quotas in an attempt to manage the dwindling resource. This makes those resources more expensive, so the free market can adjust to their real cost. The atmosphere, on the other hand, is a globally shared resource with no global body capable of regulating it.

I don’t want to get into some climate change debate here, so let’s look at it from a higher level. Whenever we have a common resource we always over-exploit it until, as a society, we put regulations in place for managing it. In cases where we didn’t (Easter Island comes to mind), we use up the resource completely with catastrophic results.

I realize the atmosphere is vast, but it’s not limitless. While everyone’s very concerned with fossil fuel use, if it were only about dwindling reserves of fossil fuels, it wouldn’t be a problem. The free market would take care of making fossil fuels more expensive as they start to run out, and other energy sources would take their place. However, when we burn fossil fuels, the energy we get comes from a reaction between the fuel and oxygen in the atmosphere. The simple model is that oxygen is converted into carbon dioxide (for methane, CH4 + 2 O2 → CO2 + 2 H2O), so some of the potential energy of the reaction comes from the atmosphere, not just the fuel. We need to run that carbon dioxide through plants again to get the energy (and oxygen) back, and of course that takes much, much longer (and much more work) than the energy we actually got out of the reaction.

If climate scientists are right, then we’re also causing a serious amount of harm to the climate at the same time. This is a debt that will continue accruing interest long after we’re dead. I recognize the uncertainty of the future, but as any good risk manager knows, you shouldn’t gamble more than you can afford to lose.

This is a free market hack because we treat it like a perpetual motion machine, but we’re really just sapping energy from a big flywheel. Like debt, whether this is “good” or “bad” depends on whether we can turn that consumption into something even more valuable for the future. In general (though not in every case), each generation before us left a better world than the one it inherited. Most of the value we have now is knowledge passed from generation to generation. Passing information forward costs comparatively little next to the value we gain from access to that information. So even when our ancestors used up resources, they couldn’t do it on a scale big enough to offset the value of the knowledge they were passing forward. It’s ironic if that knowledge let us build a big enough lever to tip the scales.

It seems pretty certain that if we fail to leave the future generations more value than we’re taking from them, they’ll make us pay. They’ll turn on the companies and families that profited at their expense, and they’ll either sue for damages, or drag the bodies of the CEOs through the streets behind camels, depending on which area of the world they live in.

Personally I’d prefer prevention over retribution. The problem is that if there really is a future cost to our actions, the market isn’t factoring that into the price. Even though companies are accruing the risk of a large future liability, they don’t have to account for this risk on their balance sheets. That’s because while future generations have a stake in this situation, they don’t have a voice now. Nobody’s appointed to represent their interests. That’s why the free market continues to be hacked, at their expense.

How could you structure such a system? Should companies be forced to account for estimated future liabilities, so they show up on their P&L statements? Do we allow them to account for true value that they’re passing forward, like new technologies they’ve developed, which can help to offset the liabilities they’re incurring? Obviously that’s impractical. Not only does it become a bureaucratic nightmare, but there’s still no international body to oversee the principles of the accounting system.

Could an appointed legal representative of future generations sue us for the present value of future damages we’re risking? Can they spend the proceeds of the lawsuits on restorative projects? (Investments with a big dividend but which don’t pay back for 100 years, like reforesting the Sahara.) I doubt a non-existent group of people can sue anybody, so I doubt that’s a workable solution either.

I’m afraid the solution lies in the hands of politicians, and that makes me sad. We need a global body that can manage natural resources, including the atmosphere. At this point, a political solution seems just as impossible.

I’m still looking for ideas and solutions. I want to contribute to a workable solution, but my compass isn’t working. If you know the way, I’m listening.

(Hint: the solution isn’t energy efficiency.)

To be honest, we could still be producing knowledge at such a huge rate that we’re still passing forward more value than we’re taking from future generations. But I don’t think anyone’s keeping track, and that’s pretty scary.

Anyway, Earth Hour’s about to start. While it’s a really silly gesture to turn your lights out for an hour in “support” of a planet that we continue to rape and pillage the other 8759 hours of the year, I think I’ll give this stuff another hour of thought. It’s literally the least I can do.

Edit: I posted a follow-up to this article called What can I do about our global resource problems?

From Automation to Fabrication?

Here’s my simplified idea of what factories do: they make a whole lot of copies of one thing really cheap.

The “really cheap” part only comes with scale. Factories don’t make “a few” of anything. They’re dependent on a mass market economy. Things need to be cheap enough for the mass market to buy them, but they also need to change constantly, because as humans we have an appetite for novelty. As the speed of innovation increases, factories spend more and more of their time retooling.

The result is more demand for flexibility in automation. Just look at the rise in Flexible Automation and, more recently, Robotic Machining.

Where does this trend point? We’ve already seen low cost small scale fabrication machines popping up, like MakerBot and CandyFab. These are specialized versions of 3D Printers. Digital sculptors can design their sculpture in software, press print, and voila, the machine prints a copy of their object.

Now imagine a machine that’s part 3D Printer, 6-axis robot, laser cutter/etcher, and circuit board fabricator all-in-one. Imagine our little machine has conveyors feeding it stock parts from a warehouse out back, just waiting for someone to download a new design.

That kind of “fabrication” machine would be a designer’s dream. In fact, I don’t think our current pool of designers could keep up with demand. Everyone could take part in design and expression.

I don’t see any reason why this fictional machine is actually beyond our technological capabilities. It would certainly be very expensive to develop (I’m going to take a stab and say it’s roughly as complex as building an auto plant), but once you’ve made one, you have the capability to make many more.

For more ideas, take a look at what MIT’s Fab Lab is working on.

Renaming “Best Practices”

Ok, so I’ve complained about “Best Practices” before, but I want to revisit the topic and talk about another angle. I think the reason we go astray with “Best Practices” is the name. Best. That’s pretty absolute. How can you argue with that? How can any other way of doing it be better than the “Best” way?

Of course there are always better ways to do things. If we don’t figure them out, our competitors will. We should call these standards Baseline Practices. They represent a process for performing a task with a known performance curve. What we should be telling employees is, “I don’t care what process you use, as long as it performs at least as well as this.” That will encourage innovation. When we find better ways, that new way becomes the new baseline.

In case you haven’t read Zen and the Art of Motorcycle Maintenance and its sequel, Lila: in those books, Pirsig describes two forms of quality, static and dynamic. Static quality is things like procedures and cultural norms; it’s the way we pass information from generation to generation, or just between peers on the factory floor. Dynamic quality is the creativity that drives change. Together they form a ratchet-like mechanism: dynamic quality moves us from point A to point B, and static quality filters the B points, throwing out the ones that fall below the baseline.

I’ve heard more than one person say that we need to get everyone doing things the same way, and they use this as an argument in favour of best practices. I think that’s wrong. We have baseline practices to facilitate knowledge sharing. They get new employees up to speed fast. They allow one person to go on vacation while another person fills in for them. They are the safety net. But we always need to encourage people to go beyond the baseline. It needs to be stated explicitly: “we know there are better ways of doing this, and it’s your job to figure out what those ways are.”

Estimating Software Projects

Why is it so hard to estimate software (and control system) projects?

Does this seem familiar? You start out as a grunt in the trenches designing stuff, writing code, and you keep getting a string of projects to work on with insanely low budgets and short timelines, and you think, “I can do better than this.” You decide you want to manage your own projects, and your boss thinks that’s great because (a) he trusts you, and (b) that’s one less thing for him to do. He starts you off with some small project…

The first thing he asks you for is an estimate. You listen to the scope, give it about 10 minutes of thought, write up an estimate and hand it to him:

  • Design: 3 days
  • Programming: 2 weeks
  • Testing: 2 days

So, three weeks. “Let’s just say four weeks,” says your boss. Great!

You start out on your design. It takes about 3 days. Then you start programming, and you get it all working, and you’re on budget. Excellent! Your boss asks you how it’s going, and you say, “I’m on budget! I just have some testing left to do.” Great, let’s have a review with the customer.

After the review with the customer, it’s clear that half of what you did isn’t going to meet their expectations, and the other half is going to get all messed around because of the first half changing. Damn!

“How is this going to affect the budget?” says your boss…

“Well, since we have four weeks, and we’re only two and a half weeks in, we should be ok. I’ll probably have to come in on a weekend.”

So you work your butt off, and the hours are really racking up, but you think you’re going to make it, and all of a sudden, BAM! You hit one annoying problem that you just can’t seem to fix. Maybe it’s a bug in a Windows driver, or a grounding issue, or whatever. It’s something hard, and you burn a whole week trying various solutions. Suddenly you’re over budget.

After this experience, and a few more like it, you start to get the hint that maybe it’s you. Then you learn there’s an entire industry out there that functions as a support group for people like you: Project Management. You get to go to little two-day project management seminars where they commiserate with you. An instructor gets up and talks about all the common pitfalls of managing a project, and everyone with any experience trying to do it says, “yeah, exactly!”

Then they tell you the solution to this is to impose the tried and true “Project Management Methodology” on it:

  • Initiate
  • Plan
  • Execute
  • Monitor
  • Complete

This gives you a warm fuzzy feeling. The feeling that you’re back in control of your world, and if you’re into control systems like me, then you’re all about controlling your world.

Now the first step includes the estimation step. You do a work breakdown structure, down to small details, you assign best case, probable case, and worst case estimates to each task, and then you use some pseudo-statistics to come up with a weighted sum of the estimates for all the tasks.
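
In case you haven’t seen it, that “pseudo-statistics” step is usually some variation of the classic three-point (PERT-style) weighted estimate. Here’s a minimal sketch in C#; the task names and hours are placeholders, not real project data:

// Three-point (PERT-style) estimate: expected = (best + 4 * likely + worst) / 6.
// The task names and hours below are made-up placeholders.
using System;

class EstimateDemo
{
    static double Expected(double best, double likely, double worst)
    {
        return (best + 4 * likely + worst) / 6.0;
    }

    static void Main()
    {
        var tasks = new[]
        {
            new { Name = "Design",      Best = 16.0, Likely = 24.0, Worst = 60.0 },
            new { Name = "Programming", Best = 60.0, Likely = 80.0, Worst = 200.0 },
            new { Name = "Testing",     Best = 12.0, Likely = 16.0, Worst = 80.0 },
        };

        double total = 0;
        foreach (var t in tasks)
        {
            double e = Expected(t.Best, t.Likely, t.Worst);
            total += e;
            Console.WriteLine("{0,-12} expected {1,6:F1} h", t.Name, e);
        }
        Console.WriteLine("Project total: {0:F1} h", total);
    }
}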

This process has worked well, but only in certain industries. If you’re building a house, it works well. If you’re building a skyscraper or a bridge, it works well, particularly if the result is similar to something you’ve done before.

It’s odd that most Project Management books use the “send a man to the moon” example. While that project was completed on time (before the end of the decade), and it achieved the performance goal, it went wayyy over budget. If it had been a corporate project, it would have been a complete failure. It was only considered a success because it was a government job.

If you’re trying to estimate how long it will take to build a house, the estimate starts with a completed design. Then the Project Manager calls up the framers, roofers, drywallers, electricians, plumbers, etc., and gives them a copy of the blueprints. The plumber has a really good idea, based on the number of fixtures, etc., how much it’s going to cost and how long it will take. He does this all the time.

Software is a different beast. It’s like asking you to estimate the architect’s time when he drew up the blueprints of the house. The construction of the house is analogous to the compiling step in software development. In software, we’ve completely automated that step. We can go from design to prototype in a matter of seconds. What we’re estimating in software development is the design effort, not the construction effort. Unfortunately, before the design is done, we haven’t even figured out what the problems are yet, let alone the solutions.

Computer programming is almost always novel. Repetition is the ultimate sin. If you find repetition, you refactor it out into a re-usable library or module. Each project brings new platforms and technologies, ever higher levels of abstraction, and a novel problem, so we have no baseline to estimate from.

In fact, the easier it is to estimate a project, the less important that project really is. If your job is easy to estimate, it’s probably repetitive enough that it’ll soon be automated away.

Finally Getting an Arduino

I cruised through January in a kind of sleep deprived stupor (we just had our second child this December). Things are finally swinging back to normal, and I’m getting my geek back on.

I’ve been looking for a less expensive way to do discrete (or analog) I/O over WiFi for homebrew projects. I want something that can be compatible with SoapBox Snap (the open source ladder logic editor/runtime I’ve been working on), so I’ll need to be able to write a communication driver in C#. I’ve been rather frustrated by the options:

  • National Control Devices WiFi Relay Boards – but they start at $250 and go up from there.
  • Insteon – perfect for home automation and more reliable than X10, but their software license agreement for their development kit is extremely unfriendly to open source.
  • Belkin and other manufacturers have created some wireless USB hubs, but I can’t seem to find anyone who has them in stock, most have been discontinued, and the cost is still prohibitive ($100+) especially when you figure you still have to buy some Phidgets boards on top of that.

Then I finally decided this was my best choice: the YellowJacket WiFi Arduino. Arduino is a family of open-source hardware designs for microcontroller boards. You can buy add-on cards for them (called shields), but you can also purchase custom designs with specific features, like this one with built-in 802.11b WiFi.

The price is right ($55), it’s very open source friendly, and since I get to program the microcontroller end, I should have no problem writing a driver for it in C#. Unfortunately it’s on back order, but I expect to get it in a couple of weeks. I’ll post more after I’ve played with it a bit.
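
For the record, the PC side of a driver like that doesn’t need to be complicated. Here’s a rough sketch of what I have in mind, assuming I program the Arduino end to answer a simple line-based request over TCP; the IP address, port, and “READ” command are an invented protocol of my own, not something the YellowJacket board defines for you:

// Sketch of the PC side of a WiFi I/O driver. The protocol here (port 5000,
// a "READ" command answered with a comma-separated list of input states) is
// something I'd program into the Arduino myself -- it's not defined by the board.
using System;
using System.IO;
using System.Net.Sockets;

class ArduinoWifiIo
{
    static void Main()
    {
        using (var client = new TcpClient("192.168.1.50", 5000))
        using (var stream = client.GetStream())
        {
            var writer = new StreamWriter(stream) { AutoFlush = true };
            var reader = new StreamReader(stream);

            writer.WriteLine("READ");          // ask the board for its discrete input states
            string reply = reader.ReadLine();  // e.g. "1,0,0,1,1,0"

            Console.WriteLine("Discrete inputs: " + reply);
        }
    }
}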

I must admit there’s one other honourable mention, though it was a bit too much of a hack for me. There are these cheap routers you can get called La Fonera (made by FON). It turns out that they have 3 or 4 unused general purpose TTL level I/O on the circuit board, and if you install a copy of the open source DD-WRT firmware on it, it lets you control those I/O using the command line prompt if you telnet or SSH into the router. Perfect, and you can pick these things up really cheap on eBay. Unfortunately I wanted something just a little more off-the-shelf than that. (I do have a copy of DD-WRT running on my old Linksys router and I’m using it as a wireless bridge to extend the range of the wireless in my house, so I did give it some serious consideration.)

Control System Security and 2010

Looking back at the year 2010, there was one really interesting and important happening in the world of industrial control system security: Stuxnet.

There’s a lot of speculation about this computer worm, but let’s just look at the facts:

  • It required substantially more resources to create than a typical computer worm (some estimates put it around $1,000,000, if you figure 5 person-years and the cost to employ specialized programmers)
  • It targets Siemens WinCC and Step 7 software, so that it can infect the Siemens S7 PLCs they program
  • It looks like it was specifically targeted at a single facility (based on the fact that it was targeting a specific PLC, and only specific brands of VFDs)
  • It was designed to do real physical damage to equipment
  • It was designed to propagate via USB memory sticks to make it more likely to spread inside industrial settings, and even updated itself in a peer-to-peer manner (new versions brought in from the outside could update copies already inside a secure network)

If your average computer worm is the weapon-equivalent of a hatchet, Stuxnet is a sniper rifle. There is speculation that the intended target was either the Bushehr Nuclear Power Plant or the Natanz Nuclear Facility, both in Iran, but what is known is that it has spread far and wide in other industrial networks. It’s been called the world’s first cyber super weapon.

What’s clear is that our industry’s relative ignorance when it comes to computer security has to end. Stuxnet proved the worst case, which is that a proprietary embedded PLC can successfully be the target of a computer worm.

I’ve been watching as more and more old-school vendors include Windows CE based devices as HMI components in their systems (like the PanelView Plus from Allen-Bradley). These are susceptible to the same kinds of threats that can infect Microsoft-based smartphones, and it takes a lot less than $1,000,000 to write one of those; it’s the kind of thing some kid can put together in his basement for fun.

I’ve also seen (and even been pushing) a trend towards PC-based control. It’s nothing new, and I’ve seen PC-based control solutions out there for almost 10 years now, but it’s the networking that makes them vulnerable. In one facility about five years ago, I saw a PC-based control system completely taken down by a regular old computer worm. There were two contributing factors in that case… first, the control system was on the same network as the main office network (the virus was brought in by an employee’s laptop that they connected at home), and second, the vendor of the control software prohibited the customer from installing anti-virus software on the control system PC because they said it would void the warranty. I rarely see these same mistakes in new installations, but it does happen.

A couple of years ago I found a computer virus on an industrial PC with a VB6 application used as an HMI for a PLC/2. The PC in question was never connected to any network! The virus found its way to this computer over floppy disks and USB memory sticks.

Now if your facility is juicy enough that someone would spend $1,000,000 to take a shot at it, then you need specialized help. Stuxnet is a boon to security consultants because now they have a dramatic story to wave in the face of clients. On the other hand, most facilities need some kind of basic security measures.

  • Separate your industrial and office networks (if you need to move data from one to the other, then have a secure machine that can sit on both networks)
  • Make sure all machines install Windows Updates and new anti-virus definitions automatically, even if they’re on the industrial network
  • Change the default passwords on all devices and servers (including SQL Server’s SA password!)
  • Use different technologies in different layers (does your office network use Cisco managed switches? Consider using industrial managed switches from a different vendor in your industrial network)

Are you an integrator looking to expand your lines of business? Hire a computer security consultant and have them go knocking on the doors of your biggest customers with the Stuxnet story. You should be able to sell them a security assessment, and an action plan. Given the current security landscape, you’ll be doing them a big favour.

Intro to Mercurial (Hg) Version Control for the Automation Professional

There’s a tool in the PC programming world that nobody would live without, but almost nobody in the PLC or Automation World uses: version control systems. Even most PC programmers shun version control at first, until someone demonstrates it for them, and then they’re hooked. Version control is great for team collaboration, but most individual programmers working on a project use it just for themselves. It’s that good. I decided since most of you are in the automation industry, I’d give you a brief introduction to my favourite version control system: Mercurial.

It’s called “Mercurial” and always pronounced that way, but it’s frequently referred to by the abbreviation “Hg”, after the chemical symbol for the element mercury.

Mercurial has a lot of advantages:

  • It’s free. So are all of the best version control systems actually. Did I mention it’s the favourite tool of PC programmers? Did I mention PC programmers don’t like to pay for stuff? They write their own tools and release them online as open source.
  • It’s distributed. There are currently two “distributed version control systems”, or DVCS, vying for supremacy: Hg and Git. Distributed means you don’t need a connection to the server all the time, so you can work offline. This is great if you work from a laptop with limited connectivity, which is why I think it’s perfect for automation professionals. Both Hg and Git are great. The best comparison I’ve heard is that Hg is like James Bond and Git is like MacGyver. I’ll let you interpret that…
  • It has great tools. By default it runs from the command line, but you never have to do that. I’ll show you TortoiseHg, a popular Windows shell that lets you manage your versioned files inside of a normal Windows Explorer window. Hg also sports integration into popular IDEs.

I’ll assume you’ve downloaded TortoiseHg and installed it. It’s small and also free. In fact, it comes bundled with Mercurial, so you only have to install one program.

Once that’s done, follow me…

First, I created a new Folder on my desktop called My New Repository:

Now, right click on the folder. You’ll notice that TortoiseHg has added a new submenu to your context menu:

In the new TortoiseHg submenu, click on “Create Repository Here”. What’s a Repository? It’s just Hg nomenclature for a folder on your computer (or on a network drive) that’s version controlled. When you try to create a new repository, it asks you this question:

Just click the Create button. That does two things. It creates a sub-folder in the folder you right-clicked on called “.hg”. It also creates a “.hgignore” file. Ignore the .hgignore file for now. You can use it to specify certain patterns of files that you don’t want to track. You don’t need it to start with.

The .hg folder is where Mercurial stores all of its version history (including the version history of all files in this repository, including all files in all subdirectories). This is a particularly nice feature about Mercurial… if you want to “un-version” a directory, you just delete the .hg folder. If you want to make a copy of a repository, you just copy the entire repository folder, including the .hg subdirectory. That means you’ll get the entire history of all the files. You can zip it up and send it to a friend.

Now here’s the cool part… after you send it to your friend, he can make changes to the files while you make changes to your copy; you can both commit your changes to your local copies, and you can merge the changes together later. (The merge is very smart, and actually does a line-by-line merge in the case of text files, CSV files, etc., which works really well for PC programming. Unfortunately if your files use a proprietary binary format, like Excel or a PLC program, Mercurial can’t merge them, but it will at least track versions for you. If the vendor provides a proprietary merge tool, you can configure Mercurial to open that tool to merge the two files.)
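
As a rough idea of what that configuration looks like (the tool name and path here are made up; see “hg help merge-tools” for the real details), you’d add something like this to your Mercurial.ini or .hgrc file:

[merge-tools]
; "vendormerge" is a made-up name for whatever proprietary merge tool you have
vendormerge.executable = C:\Vendor\MergeTool.exe
vendormerge.args = $base $local $other $output

[merge-patterns]
; send files that tool understands to it, instead of the default text merge
**.xlsx = vendormerge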

Let’s try an example. I want to start designing a new automation cell, so I’ll just sketch out some rough ideas in Notepad:

Line 1
  - Conveyor
    - Zone 1
    - Zone 2
    - Zone 3
  - Robot
    - Interlocks
    - EOAT
    - Vision System
  - Nut Feeder

I save it as a file called “Line 1.txt” in my repository folder. At this point I’ve made changes to the files in my repository (by adding a file) but I haven’t “committed” those changes. A commit is like a light-weight restore point. You can always roll back to any commit point in the entire history of the repository, or roll forward to any commit point. (You can also “back out” a single commit even if it was several changes ago, which is a very cool feature.) It’s a good idea to commit often.

To commit using TortoiseHg, just right click anywhere in your repository window and click the “Hg Commit…” menu item in the context menu. You’ll see a screen like this:

It looks a bit weird, but it’s showing you all the changes you’ve made since your “starting point”. Since this is a brand new repository, your starting point is just an empty directory. Once you complete this commit, you’ll have created a new starting point, and it will track all changes after the commit. However, you can move your starting point back and forth to any commit point by using the Update command (which I’ll show you later).

The two files in the Commit dialog box show up with question marks beside them. That means it knows that it hasn’t seen these files before, but it’s not sure if you want to include them in the repository. In this case you want to include both (notice that the .hgignore file is also a versioned file in the repository… that’s just how it works). Right click on each one and click the Add item from the context menu. You’ll notice that it changes the question mark to an “A”. That means the file is being added to the repository during this commit.

In the box at the top, you have to enter some description of the change you’re making. In this case, I’ll say “Adding initial Line 1 layout”. Now click the Commit button in the upper left corner. That’s it, the file is now committed in the repository. Close the commit window.

Now go back to your repository window in Windows Explorer. You’ll notice that they now have green checkmark icons next to them (if you’re using Vista or Windows 7, sometimes you have to go into or out of the directory, come back in, and press F5 to see it update):

The green checkmark means the file is exactly the same as your starting point. Now let’s try editing it. I’ll open it and add a Zone 4 to the conveyor:

Line 1
  - Conveyor
    - Zone 1
    - Zone 2
    - Zone 3
    - Zone 4
  - Robot
    - Interlocks
    - EOAT
    - Vision System
  - Nut Feeder

The icon in Windows Explorer for my “Line 1.txt” file immediately changed from a green checkmark to a red exclamation point. This means it’s a tracked file and that file no longer matches the starting point:

Notice that it’s actually comparing the contents of the file, because if you go back into the file and remove the line for Zone 4, it will eventually change the icon back to a green checkmark!

Now that we’ve made a change, let’s commit that. Right click anywhere in the repository window again and click “Hg Commit…”:

Now it’s showing us that “Line 1.txt” has been Modified (see the M beside it) and it even shows us a snapshot of the change. The box in the bottom right corner shows us that we added a line for Zone 4, and even shows us a few lines before and after so we can see where we added it. This is enough information for Mercurial to track this change even if we applied this change in a different order than other subsequent changes. Let’s finish this commit, and then I’ll give you an example. Just enter a description of the change (“Added Zone 4 to Conveyor”) and commit it, then close the commit window.

Now right-click in the repository window and click on Hg Repository Explorer in the context menu:

This is showing us the entire history of this repository. I’ve highlighted the last commit, so it’s showing a list of files that were affected by that commit, and since I selected the file on the left, it shows the summary of the changes for that file on the right.

Now for some magic. We can go back to before the last commit. You do this by right-clicking on the bottom revision (that says “Adding initial Line 1 layout”) and selecting “Update…” from the context menu. You’ll get a confirmation popup, so just click the Update button on that. Now the bottom revision line is highlighted in the repository explorer meaning the old version has become your “starting point”. Now go and open the “Line 1.txt” file. You’ll notice that Zone 4 has been removed from the Conveyor (don’t worry if the icons don’t keep up on Vista or Win7, everything is working fine behind the scenes).

Let’s assume that after the first commit, I gave a copy of the repository to someone, (so they have a copy without Zone 4), and they made a change to the same file. Maybe they added some detail to the Nut Feeder section:

Line 1
  - Conveyor
    - Zone 1
    - Zone 2
    - Zone 3
  - Robot
    - Interlocks
    - EOAT
    - Vision System
  - Nut Feeder
    - 120VAC, 6A

Then they committed their change. Now, how do their changes make it back into your repository? That’s done using a feature called Synchronize. It’s pretty simple if you have both copies on your computer, or if you each have a copy and there’s also a “master” copy of the repository on a server that you can both see. What happens is they “Push” their changes to the server copy, and then you “Pull” their change over to your copy. (Too much detail for this blog post, so I’ll leave that to you as an easy homework assignment.) What you’ll end up with, when you look at the repository explorer, is something like this:

You can clearly see that we have a branch. We both made our changes to the initial commit, so now we’ve forked it. This is OK. We just have to do a merge. In a distributed version control system, merges are normal and fast. (They’re fast because it does all the merge logic locally, which is faster than sending everything to a central server).

You can see that we’re still on the version that’s bolded (“Added Zone 4 to Conveyor”). The newer version, on top, is the one with the Nut Feeder change from our friend. In order to merge that change with ours, just right click on their version and click “Merge With…” from the context menu. That will give you a pop-up. It should be telling you, in a long-winded fashion, that you’re merging the “other” version into your “local” version. That’s what you always want. Click Merge. It will give you another box with the result of the merge, and in this case it was successful because there were no conflicts. Now click Commit. This actually creates a new version in your repository with both changes, and then updates your local copy to that merged version. Now take a look at the “Line 1.txt” file:

Line 1
  - Conveyor
    - Zone 1
    - Zone 2
    - Zone 3
    - Zone 4
  - Robot
    - Interlocks
    - EOAT
    - Vision System
  - Nut Feeder
    - 120VAC, 6A

It has both changes, cleanly merged into a single file. Cool, right!?

What’s the catch? Well, if the two changes are too close together, it opens a merge tool where it shows you the original version before either change, the file with one change applied, the file with the other change applied, and then a workspace at the bottom where you can choose what you want to do (apply one, the other, both, none, or custom edit it yourself). That can seem tedious, but it happens rarely if the people on your project are working on separate areas most of the time, and the answer of how to merge them is usually pretty obvious. Sometimes you actually have to pick up the phone and ask them what they were doing in that area. Since the alternative is one person overwriting someone else’s changes wholesale, this is clearly better.

Mercurial has a ton of other cool features. You can name your branches different names. For example, I keep a “Release” branch that’s very close to the production code where I can make “emergency” fixes and deploy them quickly, and then I have a “Development” branch where I do major changes that take time to stabilize. I continuously merge the Release branch into the Development branch during development, so that all bug fixes make it into the new major version, but the unstable code in the Development branch doesn’t interfere with the production code until it’s ready. I colour-code these Red and Blue respectively so you can easily see the difference in the repository explorer.

I use the .hgignore file to ignore my active configuration files (like settings.ini, for example). That means I can have my release and development branches in two different folders on my computer, and each one has a different configuration file (for connecting to different databases, or using different file folders for test data). Mercurial doesn’t try to add or merge them.
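
For example, the relevant part of my .hgignore is just a couple of glob patterns (the file names here are from my own projects, so adjust to taste):

syntax: glob
settings.ini
*.log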

It even has the ability to do “Push” and “Pull” operations over HTTP, or email. It has a built-in HTTP server so you can turn your computer into an Hg server, and your team can Push or Pull updates to or from your repository.

I hope this is enough to whet your appetite. If you have questions, please email me. Also, you can check out this more in-depth tutorial, though it focuses on the command-line version: hginit.

Will TwinCAT 3 be Accepted by Automation Programmers?

Note that this is an old article and I now have more up-to-date TwinCAT 3 Reviews and now a TwinCAT 3 Tutorial.

In the world of programming there are a lot of PC programmers and comparatively few PLC programmers, but I inhabit a smaller niche. I’m a PLC and a PC programmer. This is a dangerous combination.

If you come from the world of PLC programming, like I did, then you start out writing PC programs that are pretty reliable, but they don’t scale well. I came from an electrical background and I adhered to the Big Design Up Front (BDUF) methodology. Design changes late in a project are so expensive that BDUF is the economical model.

If you come from the world of PC programming, you probably eschew BDUF for the more popular “agile” and/or XP methodologies. If you follow agile principles, your goal is to get minimal working software in front of the customer as soon as possible, and as often as possible, and you keep doing this until you run out of budget. As yet there are no studies that prove Agile is more economical, but it’s generally accepted to be more sane. That’s because of the realization that the customer just doesn’t know what they want until they see what they don’t want.

It would be very difficult to apply agile principles to hardware design, and trying to apply BDUF (and the “waterfall” model) to software design caused the backlash that became Agile.

Being both a PLC and a PC programmer, I sometimes feel caught between these two worlds. People with electrical backgrounds tend to dislike the extra complexity that comes from the layers and layers of abstraction used in PC programming. Diving into a typical “line of business” application today means you’ll need to understand a dizzying array of abstract terminology like “Model”, “View”, “Domain”, “Presenter”, “Controller”, “Factory”, “Decorator”, “Strategy”, “Singleton”, “Repository”, or “Unit Of Work”. People from a PC programming background, however, tend to abhor the redundancy of PLC programs, not to mention the lack of good Separation of Concerns (and for that matter, source control, but I digress).

These two worlds exist separately, but for the same reason: programs are for communicating to other programmers as much as they’re for communicating to machines. The difference is that the reader, in the case of a PLC program, is likely to be someone with only an electrical background. Ladder diagram is the “lingua franca” of the electrical world. Both electricians and electrical engineers can understand it. This includes the guy who happens to be on the night shift at 2 am when your code stops working, and he can understand it well enough to get the machine running again, saving the company thousands of dollars per minute. On the other hand, PC programs are only ever read by other PC programmers.

I’m not really sure how unique my situation is. I’ve had two very different experiences working for two different Control System Integrators. At Patti Engineering, almost every technical employee had an electrical background but were also proficient in PLC, PC, and SQL Server database programming. On the other hand, at JMP Engineering, very few of us could do both, the rest specialized in one side or the other. In fact, I got the feeling that the pure PC programmers believed PLC programming was beneath them, and the people with the electrical backgrounds seemed to think PC programming was boring. As one of the few people who’ve tried both, I can assure you that both of these technical fields are fascinating and challenging. I also believe that innovation happens on the boundaries of well established disciplines, where two fields collide. If I’m right, then both my former employers are well positioned to cash in on the upcoming fusion of data and automation technologies.

TwinCAT 3

I’ve been watching Beckhoff for a while because they’re sitting on an interesting intersection point.

On the one side, they have a huge selection of reasonably priced I/O and drive hardware covering pretty much every fieldbus you’d ever want to connect to. All of their communication technologies are built around EtherCAT, an industrial fieldbus of their own invention that then became an open standard. EtherCAT, for those who haven’t seen it, has two amazing properties: it’s extremely fast, compared with any other fieldbus, and it’s inexpensive, both for the cabling and the chip each device needs to embed for connectivity. It’s faster, better, and cheaper. When that happens, it’s pretty clear the old technologies are going to be obsolete.

On the other side, they’re a PC-based controls company. Their PLC and motion controllers are real-time industrial controllers, but you can run them on commodity PC hardware. As long as PCs continue to become more powerful, Beckhoff’s hardware gets faster, and they get those massive performance boosts for free. Not only that, but they get all the benefits of running their PLC on the same machine as the HMI, or other PC-based services like a local database. As more and more automation cells need industrial PCs anyway, integrators who can deliver a solution that combines the various automation modules on a single industrial PC will be more competitive.

Next year Beckhoff is planning to release TwinCAT 3, a serious upgrade from their existing TwinCAT 2.11. The biggest news (next to support for multiple cores) is that the IDE (integrated development environment) is going to be built around Microsoft’s Visual Studio IDE. That’s a pretty big nod to the PC programmers… yes you can still write in all the IEC-61131-3 languages, like ladder, function block, etc., but you can also write code in C/C++ that gets compiled down and run in the real-time engine.

Though it hasn’t been hyped as much, I’m pretty excited that you can have a single project (technically it’s called a “solution”) that includes both automation programming, and programming in .NET languages like C# or VB.Net. While you can’t write real-time code in the .NET languages, you can communicate between the .NET and real-time parts of your system over the free ADS communication protocol that TwinCAT uses internally. That means your system can now take advantage of tons of functionality in the .NET framework, not to mention the huge number of third-party libraries that can be pulled in. In fact, did you know that Visual Studio has a Code Generation Engine built in? It would be pretty cool to auto-generate automation code, like ladder logic, from templates. You’d get the readability of ladder logic without the tedious copy/paste/search/replace. (Plus, Visual Studio has integrated source control, but I digress…)
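
If you haven’t used ADS from .NET before, here’s a rough sketch of what that communication looks like using Beckhoff’s existing TwinCAT.Ads .NET library; I’m assuming TwinCAT 3 keeps a similar API, and the variable name and port number are just placeholders:

// Sketch of reading a PLC variable from a .NET program over ADS.
// Based on the TwinCAT.Ads library as it exists today; I'm assuming TwinCAT 3
// keeps a similar shape. "MAIN.nCounter" and port 801 are placeholders
// (801 is the first PLC runtime in TwinCAT 2).
using System;
using TwinCAT.Ads;

class AdsDemo
{
    static void Main()
    {
        using (var client = new TcAdsClient())
        {
            client.Connect(801);  // connect to the local PLC runtime

            int handle = client.CreateVariableHandle("MAIN.nCounter");
            try
            {
                short counter = (short)client.ReadAny(handle, typeof(short));
                Console.WriteLine("MAIN.nCounter = " + counter);
            }
            finally
            {
                client.DeleteVariableHandle(handle);
            }
        }
    }
}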

Will anyone take advantage?

With such a split between PC and PLC programmers, exactly who is Beckhoff targeting with TwinCAT 3? They’ve always been winners with the OEM market, where the extra learning curve can be offset by lower commodity hardware prices in the long term. I think TwinCAT 3 is going to be a huge win in the OEM market, but I really can’t say where it’s going to land as far as integrators are concerned. Similar to OEMs, I think it’s a good fit for integrators that are product focused because the potential for re-use pays for your ramp-up time quickly.

It’s definitely a good fit for my projects. I’m interested to see how it turns out.

Voting Machines Done Wrong are Dangerous

Talking about voting machines on this blog might seem a little off-topic, but I’m always fascinated by how automation is interconnected with the people using it. That’s why I think voting machines are so interesting: people are as much a part of the system as the technology.

I was interested to watch David Bismark’s recent TED talk on “E-voting without fraud”:

The method he’s describing seems to be the same as in this IEEE article.

Now I’m not an expert in the election process, but there are some fundamental things we all understand. One of those fundamental elements is called the “Secret Ballot”. Canada and the US have both had a secret ballot since the late 1800s. When the concept is introduced in school, we’re shown a picture of how people used to cast their votes, which was to stand up in front of everyone at the polling station and call out your choice. Off to the side of the picture, we always saw a gang of people ready to rough up the people who voted for the “wrong” candidate. Therefore, most of us grow up thinking that freedom from retribution is the one and only reason for a secret ballot, so everyone thinks, “as long as nobody can learn who I voted for, then I’m safe.”

That’s really only half the reason for a secret ballot. The other half of the reason is to prevent vote-selling. In order to sell your vote, you have to prove who you voted for. With a secret ballot, you can swear up and down that you voted for Candidate A, but there really is no way for even you to prove who you voted for. That’s a pretty remarkable property of our elections. That’s the reason that lots of places won’t allow you to take a camera or camera-equipped cell phone into the voting booth with you. If the system is working correctly, you shouldn’t be able to prove who you voted for. That means you’re really free to vote for the candidate you really want to win.

I would also like to point out that vote-selling isn’t always straightforward. Spouses (of both genders) sometimes exert extreme pressure over their significant others, and some might insist on seeing proof of who the other voted for. Likewise, while employers could get in hot water, I could easily imagine a situation where proving to your boss that you voted the way he wanted ended up earning you a raise or a promotion over someone who didn’t. All of these pseudo-vote-selling practices always favour the societal group that has a lot of power at the moment, which is why it’s important for our freedoms to limit their influence.

That Means NO Voting Receipts

If you want to design a system that prevents vote-selling, you can’t allow the voter to leave the polling station with any evidence that can be used to prove who they voted for. (The system presented above allows you to leave with a receipt, but they claim it can’t be used to prove who you voted for.)

With this in mind, isn’t it amazing how well our voting system works right now? You mark your ballot in secret, then you fold up the paper, walk out from the booth in plain public view, and you put your single vote into the ballot box with everyone else’s. Once it’s in that box and that box is full of many votes, it’s practically impossible to determine who cast which vote, but if we enforce proper handling of the ballot box, we can all trust that all of the votes were counted.

We Want to Destroy Some Information and Keep Other Information

In order for the system to work correctly, we need to effectively destroy the link between voter and vote, but reliably hang on to the actual vote and make sure it gets counted.

Anyone who has done a lot of work with managing data in computers probably starts to get nervous at that point. In most computer systems, the only way we can really trust our data is to add things like redundancy and audit logs, all of it in separate systems. That means there’s a lot of copying going on, and it’s very easy to share the information that you’re trying to destroy. Once you’ve shared it, what if the other side mishandles it? Trust me, it’s a difficult problem. It’s even more complicated when you realize that even if the voting software was open source, you really can’t prove that a machine hasn’t been tampered with.

The method described above offers a different approach:

  • With the receipt you get, you can prove that it is included in the “posted votes”
  • You can prove that the list of “tally votes” corresponds to the list of “posted votes” (so yours is in there somewhere)
  • You can’t determine which tally vote corresponds to which posted vote

ATMs and Voting Machines are Two Different Ballgames

One of the things you often hear from voting machine proponents, or just common people who haven’t thought about it much, is that we’ve been using “similar” machines for years that take care of our money (ATMs) and they can obviously be designed securely enough. Certainly if we have security that’s good enough for banks, it ought to be good enough for voting machines, right?

This is a very big fallacy. The only reason you trust an ATM is because every time there’s a bank transaction, it’s always between at least two parties, and each party keeps their own trail of evidence. When you deposit your paycheque into the ATM, you have a pay stub, plus the receipt that the ATM prints out that you can take home with you. On top of that, your employer has a record that they issued you that cheque, and there will be a corresponding record in their bank account statement showing that the money was deducted. If the ATM doesn’t do its job, there are lots of records elsewhere held by third parties that prove that it’s wrong. An ATM is a “black box”, but it has verifiable inputs and outputs.

The system above attempts to make the inputs and outputs of the voting system verifiable.

Another Workable E-voting System

The unfortunate thing about the proposed system, above, is that it’s rather complicated. If you read the PDF I linked to, you need a couple of Ph.D. dissertations under your belt before you can make it through. I don’t like to criticize without offering a workable alternative, so here goes:

Paper Ballots

If you want to make a secret ballot voting system that’s resistant to fraud, you absolutely need to record the information on a physical record. If you want to make it trustworthy, the storage medium needs to be human readable. Paper always has been, and continues to be, a great medium for storing human readable information in a trustworthy and secure way. There are ways to store data securely electronically, but at the moment it requires you to understand a lot of advanced mathematical concepts, so it’s better if we stick with a storage medium that everyone understands and trusts. In this system we will stick with paper ballots. They need to go into a box, in public view, and they need to be handled correctly.

Standardized Human and Machine Readable Ballots

Some standards organization needs to come up with an international standard for paper ballots. This standard needs to include both human and machine readable copies of the data. I suggest using some kind of 2D barcode technology to store the machine readable information in the upper right corner. Importantly: the human readable and machine readable portions should contain precisely the same information.

Please realize I’m not talking about standardized ballots that people then fill out with a pencil. I’m talking about paper ballots that are generated by a voting machine after the voter selects their choice using the machine. The voter gets to see their generated paper ballot and can verify the human readable portion of it before they put it into the ballot box.
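
To illustrate the “precisely the same information” requirement, here’s a toy sketch in C# (the ballot fields and the encoding are entirely invented for this example, not part of any real standard). The point is that the printed text and the barcode payload are generated from the same record, so an auditor can decode the barcode and compare it field-by-field against what’s printed:

// Toy illustration only: one ballot record produces both the human-readable
// text and the machine-readable payload. The fields and encoding are invented.
using System;

class BallotDemo
{
    static void Main()
    {
        string election = "2011-FED";
        string riding = "Huron-Bruce";
        string candidate = "Candidate A";

        // What gets printed in plain text on the ballot:
        string humanReadable = string.Format(
            "Election: {0}\r\nRiding: {1}\r\nVote: {2}", election, riding, candidate);

        // What gets encoded into the 2D barcode -- the same fields, nothing more:
        string barcodePayload = string.Join("|", new[] { election, riding, candidate });

        Console.WriteLine(humanReadable);
        Console.WriteLine("Barcode payload: " + barcodePayload);

        // The auditor's check: decode the barcode and compare it to the printed text.
        string[] decoded = barcodePayload.Split('|');
        bool matches = decoded[0] == election
                    && decoded[1] == riding
                    && decoded[2] == candidate;
        Console.WriteLine("Human and machine portions match: " + matches);
    }
}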

Voting Machines vs. Vote Tallying Machines

Now that we have a standardized ballot, the election agencies are free to purchase machines from any vendor, as long as they comply with the standard. There will actually be two types of machines: voting machines that actually let the voter generate a ballot, and vote tallying machines that can process printed ballots quickly by using the machine readable information on each ballot.

One of the goals of e-voting is to be able to produce a preliminary result as soon as voting has completed. Nothing says that the Voting Machines can’t keep a tally of votes, and upload those preliminary results to a central station when the election is complete. However, the “real” votes are the ones on paper in the ballot boxes.

Shortly after the election, the ballot boxes need to be properly transported to a vote tallying facility where they can be counted using the vote tallying machines, to verify the result.

Checks and Balances

Part of the verification process should be to take a random sample of ballot boxes and count them manually, using the human readable information, and compare that with the results from the vote tallying machine. This must be a public process. If a discrepancy is found, you can easily determine if it was the voting machine or the vote tallying machine that was wrong. Assuming the ballots were visually inspected by the voters, then we can assume that the human readable portion is correct. If the machine readable information doesn’t match the human readable information, then the voting machine is fraudulent or tampered with. If the machine and human readable information match, then the vote tallying machine is fraudulent or tampered with.

If one company supplied both the voting machines and the vote tallying machines, then it would be a little bit easier to commit fraud, because if they both disagreed in the same way, it might not be caught. That’s why it’s important that the machines are sourced from different independent vendors.

No Silver Bullet

Notice that none of the current or proposed solutions are resistant to someone taking recording equipment, like a camera or camera-equipped cell phone, into the voting booth with them. We still need some way to deal with this.