Author Archives: Scott Whitlock

About Scott Whitlock

I'm Scott Whitlock, an "automation enthusiast". By day I'm a PLC and .NET programmer at ETBO Tool & Die Inc., a manufacturer.

Choose MVP over MVVM

When I first saw the Model-View-ViewModel pattern, I thought it was pretty cool. I actually wrote an entire framework and an application using the MVVM pattern. It works, and it gives you a nice separation of concerns between your Model and the rest of your application.

What’s never sat well with me is the amount of redundant and sometimes boilerplate code you have to write in your ViewModel. Assuming you have POCO objects in your domain model and that your domain model shouldn’t know anything about your ViewModel (it shouldn’t), then if you have a domain class called Customer with a Name property, chances are you’ll have a ViewModel class called Customer, with a property called Name, except that the ViewModel will implement INotifyPropertyChanged, etc. It works, but once you get down to coding, it’s a LOT of extra work.

There is an alternative out there called Model-View-Presenter. Most people claim that MVVM is a form of MVP, but once you look closely, that’s not the case (or at least, that’s not how people are using MVVM). In both MVP and MVVM architectures, the ViewModel/Presenter forms a separation between the View and the Model. The difference is that in MVVM, the ViewModel works with Model objects explicitly, but in MVP both the View and the Model are abstracted services to the Presenter. Perhaps it’s clearer with an example:

class CustomerViewModel
{
    Customer wrappedCustomerModel;
    public CustomerViewModel(Customer customerToWrap)
    {
        wrappedCustomerModel = customerToWrap;
    }
    
    // Leaving out the INotifyPropertyChanged stuff
    public string Name 
    { get { return wrappedCustomerModel.Name; } }
}

class Presenter
{
    public Presenter(IView view, IModel model)
    {
        populateViewWithModelData(view, model);
        view.UserActionEvent +=
            new UserActionEventHandler((s, e) =>
            {
                model.ProcessAction(e.Action);
                populateViewWithModelData(view, model);
            });
    }
    private void populateViewWithModelData(
        IView view, IModel model)
    {
        // custom mapping logic here
    }
}

There’s at least one major benefit to the Presenter class over the ViewModel class: you can wrap the model.ProcessAction call in a try...catch block and catch all unhandled exceptions from the Model logic in one place, logging them and notifying the user in a nice, friendly way. In the ViewModel case, any property getter can throw an exception, which causes lots of problems in WPF, not the least of which is that it sometimes breaks the binding and no further updates get sent back and forth.
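Here’s roughly what I mean, reusing the Presenter from above (the ILogger parameter and the view’s ShowError method are just placeholders I’m inventing for the example):

public Presenter(IView view, IModel model, ILogger logger)
{
    populateViewWithModelData(view, model);
    view.UserActionEvent +=
        new UserActionEventHandler((s, e) =>
        {
            try
            {
                model.ProcessAction(e.Action);
                populateViewWithModelData(view, model);
            }
            catch (Exception ex)
            {
                // ILogger and ShowError stand in for whatever logging
                // and user-notification mechanism you already have
                logger.Log(ex);
                view.ShowError("Sorry, something went wrong.");
            }
        });
}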

Now let’s look at the constructor of the Presenter again:

    public Presenter(IView view, IModel model)

Nothing says that the View the presenter is hooking into couldn’t be a ViewModel:

    public Presenter(IViewModel viewModel, IModel model)

If you do this, then the Presenter separates the ViewModel from the Model! Ok, does that sound like too much architecture? Why did we want a ViewModel in the first place? We wanted it because we wanted to make the GUI logic testable, and then use WPF’s binding mechanisms to do a really simple mapping of View (screen controls) to ViewModel (regular objects). You still get that advantage. You can create a ViewModel that implements INotifyPropertyChanged and fires off an event when one of its properties changes, but it can just be a dumb ViewModel. It becomes a “Model of the View”, which is what the ViewModel is supposed to be. Since the ViewModel then has no dependencies on the Model, you can easily instantiate mock ViewModel objects in Expression Blend and pump all the test data you want into them.
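For instance, here’s a minimal sketch of what such a dumb ViewModel might look like (assuming System.ComponentModel). Notice it holds no reference to the Model at all, so Expression Blend or a unit test can fill it with whatever data it likes:

class CustomerViewModel : INotifyPropertyChanged
{
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; OnPropertyChanged("Name"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this,
                new PropertyChangedEventArgs(propertyName));
    }
}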

Doesn’t that mean we’ve shifted the problem from the ViewModel to the Presenter? The Presenter obviously has to know the mapping between the Model and the ViewModel. Does that mean it reads the Customer Name from the Model and writes it into the Customer Name property in the ViewModel? What have we gained?

What if the Presenter was smart? Let’s assume that IModel represents the state of some domain process the user is executing. Maybe it has a Save method, maybe an Abort method. Perhaps it has a property called CustomerAddress of type Address. Maybe it has a read-only property of type DiscountModel, an Enum. Even though we’re working against an abstract IModel interface, which probably doesn’t include all of the concrete public properties and methods, we have the power of reflection to inspect the actual Model object.

What if the presenter actually generated a new AddressViewModel and populated it with data from the Model any time it saw a public property of type Address on the concrete Model object? What if it hooked up listener events, or passed in a callback to the AddressViewModel so it could be notified when the user modified the address, and it would write those changes back to the Model, then inspect the Model for changes and update the ViewModel with the results? What if when it saw an Enum property on the Model, it automatically generated a DropDownListViewModel? What if, when it sees a Save method, it generates a SaveViewModel that gets mapped to a button in the View?
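Here’s a rough sketch of the kind of inspection I’m describing (assuming System.Reflection; the AddressViewModel, DropDownListViewModel and SaveViewModel types are hypothetical):

// Inside the Presenter: walk the concrete Model with reflection and
// decide what kind of ViewModel to build for each member it finds.
foreach (PropertyInfo property in model.GetType().GetProperties())
{
    if (property.PropertyType == typeof(Address))
    {
        // build an AddressViewModel, copy the Address data into it,
        // and subscribe to its changed event so edits flow back to the Model
    }
    else if (property.PropertyType.IsEnum)
    {
        // build a DropDownListViewModel from
        // Enum.GetNames(property.PropertyType)
    }
}

foreach (MethodInfo method in model.GetType().GetMethods())
{
    if (method.Name == "Save" && method.GetParameters().Length == 0)
    {
        // build a SaveViewModel whose command calls
        // method.Invoke(model, null)
    }
}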

Can we write a generic Presenter that can comprehend our Model and ViewModel objects? Can it even build the ViewModel for us, based on what the concrete Model object looks like, and perhaps based on some hints in a builder object that we pass in?

The answer to all these questions is “Yes.” We can use the Presenter to automate the generation of the ViewModel layer based on the look & feel of the domain model itself. I leave this as an exercise for the reader…

Overengineering

“Overengineering” is a word that gets thrown around a lot. It’s used with a negative connotation, but I have a hard time defining it.

It’s not the same as Premature Optimization. That’s when you add complexity in order to improve performance, at the expense of readability, but the payoff isn’t worth the cost.

If “to engineer” is synonymous with “to design”, then overengineering is spending too much time designing, and not enough time… what? Implementing?

Let’s say you and I need to travel across the continent. You despise overengineering, so you set off on foot immediately, gaining a head start. I go and buy some rollerblades, and easily pass you before the end of the day. Seeing me whiz past, you head to the nearest sporting goods store and buy a ten-speed. You overtake me not long after breakfast on the second day. “Hmm,” I think. I don’t have much money, but I rollerblade on over to a junk yard and get an old beater car. It doesn’t run, though. I do some troubleshooting… the electrical system is fine, and we have spark, but we’re just not getting ignition. I might be able to fix it, and I might not. Should I go and buy a faster bike than yours and try to catch up, or should I take my chances and see if I can fix this car? I’m pretty sure I can fix it, and if I can, I can easily win; but if I can’t, I’ve given up the lower-risk, but lower-probability, chance of winning on a bike.

It’s this last type of choice that we’re faced with as engineers. You have a project with an 8-week timespan. We estimated that it will take 10 weeks at 50 hours per week using standard practices, so the project manager just wants everyone to work 60+ hour weeks using the same practices because from their point of view, that’s the “safe” bet. As an engineer, you might be able to spend 3 weeks building something with a 90% chance of making you 2 times more efficient at building this type of project: 3 weeks spent building the tool, and then it would only take 5 weeks to complete the project, so you’re done in 8 weeks. Not only that, but then you’ve got a tool you can re-use the next time.
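Run the rough numbers on that choice (assuming a failed tool attempt still costs the full three weeks, after which you fall back to the standard ten): nine times out of ten the tool route takes 3 + 5 = 8 weeks, and one time in ten it takes 3 + 10 = 13 weeks. That averages out to about 0.9 × 8 + 0.1 × 13 = 8.5 weeks at a sane 50 hours, versus a guaranteed 10 weeks (or a death-march 8) doing it the usual way, and nine times out of ten you also walk away owning the tool.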

If every time we had this choice, we chose to make the tool first, then on average we’ll end up much further ahead. Every time we succeed (90% of the time), we’ll greatly improve our capabilities. We’ll out-innovate the competition, with lower costs and faster time to market. However, a manager is much more likely not to build the tool because they can’t tolerate the risk. The larger the company, the worse this is, and the typical excuse leveled at the “tool” solution is that it’s “overengineering.”

Imagine we’re back in the cross-continent scenario, and I’ve decided to fix the car. Two days later I’ve got car parts all over the ground, and I haven’t moved an inch. Meanwhile, you’re a hundred miles away from me on your bike. Who’s winning the race? You can clearly point to your progress… it’s progress that anyone can clearly see. I, on the other hand, can only show someone a car that’s seemingly in worse shape than it started in, plus my inability to move over the last few days. The pressure starts to mount. It’s surprising how quickly people will start to criticize the solution they don’t understand. They’ll call me a fool for even attempting it, and applaud you on your straightforward approach.

Of course you realize that if I can get the car working, the game’s over. By the time you see me pass you, it’ll be too late to pull the same trick yourself. I’ll already be travelling fast enough that you can’t catch me. If there’s a 90% chance I can fix the car, I’d say that’s the logical choice.

So is fixing the car “overengineering”? If the race was from here to the corner, then yes, I’d say so. The effort needs to be matched to the payback. Even if the race were from here to the next town, it wouldn’t give you a payback. But what if we were going to race from here to the next town once every day for the rest of the year? Wouldn’t it make sense to spend the first week getting myself a car, and then win the next 51 weeks of races?

In business, we’re in it for the long haul. It makes sense to spend time making ourselves more efficient. Why, then, do so many companies have systems that encourage drastic short term decision making at the expense of long term efficiencies and profit? How do we allow for the reasonable risk of failure in order to collect the substantial reward of innovation?

You start by finding people who can see the inefficiencies — the ones who can see what could easily be improved, streamlined, and automated. Then you need to take those people out of the environment where every minute they’re being pushed for another inch of progress. Accept that failure is a possible outcome once in a while. Yes, there’s risk, but there are also rewards. One doesn’t come without the other.

The Controls Engineer

An unbelievable buzz at quarter past eight
disturbed my deep thoughts; It’s my phone on vibrate.

It crawls ‘cross the desk, two inches or more
If I leave it, I wonder, will it fall on the floor?

I answer it finally, it’s a privilege you see
to have this fine gilded leash fastened to me.

It turns out it’s Mike in the maintenance shack
He says they’ve been fighting the dispenser out back.

“No problem,” I say, “I’ll come have a look-see”
then closing my phone I gulp back my coffee.

What do I need? My laptop, for sure,
a null modem cable, three adapters, or four?

I’ve got TWO gender benders, that should be enough.
I used to have more; I keep losing this stuff.

I glance at my tool kit, haven’t used it since June-and
I won’t use it again since we got this union.

My battery gives me ’bout ten minutes power
I’ll take my adapter; driving back here’s an hour.

Then out to my car, on my way to Plant 2
they phone me again, three text messages too.

I’m over there fast but no parking in sight.
The overflow lot’s one block down on the right.

Up to the entrance and in through the door,
Remember to sign at the desk, nine-oh-four.

My old ID badge doesn’t work with this scanner
I wonder when she will be back from the can, or

should I just get someone else to come get me?
Mike doesn’t answer, I try Mark… how ’bout Jenny?

“Hi Jenny… never mind, the receptionist’s back.”
The door latch, it closes behind me, click-clack.

Out on the floor passing blue and white panels
Watch out for things painted caution-tape-yellow.

On the right is that cell with the new network NIC,
It didn’t work well with that 5/05 SLC.

To the left is the line I commissioned in May.
It’s sat idle so far; warranty’s up next Friday.

Two more aisles down this way, a left then a right.
Hey! Now I see the dispenser in sight.

“Good morning, Mike,” I said, “How can I help?”
Mike says, “Don’t worry mate, it was just a loose belt.”

When to use a Sealed Coil vs. a Latch/Unlatch?

I just realized something I didn’t learn until at least a year into programming PLCs, and thought it would be a great thing to share for newer ladder logic programmers: when should you use a sealed-in coil vs. a latch/unlatch?

On the surface of it, a latch/unlatch instruction is sometimes frowned upon by experienced programmers because it’s correlated with bad programming form: that is, modifying program state in more than one location in the program. If you have one memory bit that you’re latching and unlatching all over the place, it really hinders readability, and I pity the fool that has to troubleshoot that code. Of course, most PLCs let you use the same memory bit in a coil instruction as much as you want, and that’s equally bad form, so I don’t take too strict of a stance on this. If you are going to use latch/unlatch instructions, make sure you only use one of each (for a given memory bit), and keep them very close together (preferably on adjacent rungs, or even in different branches of the same rung). Don’t make the user scroll, or worse yet, do a cross reference.

As you can imagine, if you’re going to use a Latch/Unlatch instruction and keep them very close together, it’s trivial to convert that to a rung with a sealed-in coil, so what, if anything, is the difference? Why have two sets of instructions that do the same thing?

It turns out (depending on the PLC hardware you’re using) that they act differently. On Allen-Bradley hardware, at least, an OTE instruction (coil) will always be reset (cleared to off) during the pre-scan. The pre-scan happens any time you restart the program, which is most importantly after a loss of power. If you’re using a sealed in coil to remember you have a pallet present in a zone, you’ll be in for a big surprise when you cycle power. All your zones will be unblocked, and you could end up with a bunch of crashes! On the other hand, OTL and OTU instructions don’t do anything during a pre-scan, so the state remains the same as it was before the power was removed.

For that reason, a latch/unlatch is a great indication of long term program state. If you have to track physical state about the real world, use a latch/unlatch instruction.

On the other hand, a sealed-in coil is a great way to implement a motion command (e.g. “attempting to advance axis A”). In that case you want your motion command to reset if the power drops out.

I hope that clears it up a bit. I always tried to avoid all latch/unlatch instructions until I understood these concepts.


Narrowing the Problem Domain

One of the ongoing tasks in industrial automation is troubleshooting. It’s not glamorous, and it’s a quadrant one activity, but it’s necessary. Like all quadrant one activities, the goal is to get it done as fast as possible so you can get back to quadrant two.

Troubleshooting is a process of narrowing the problem domain. The problem domain is all the possible things that could be causing the problem. Let’s say you have a problem getting your computer on the network. The problem can be any one of these things:

  • Physical network cable
  • Network switch(es)
  • Network card
  • Software driver
  • etc.

In order to troubleshoot as quickly as possible, you want to eliminate possibilities fast (or at least determine which ones are more likely and which are unlikely). If you don’t have much experience, your best bet is to figure out where the middle point is, then isolate the two halves and determine which half seems to be working right and which isn’t. This is guaranteed to reduce the problem domain by 50% (assuming there’s only one failure…). So, in the network problem, the physical cable is kind of in the middle. If you unplug it from the back of the computer and plug it into your laptop, can the laptop get on the internet? If yes, the problem’s in your computer; otherwise, it’s upstream. Rinse and repeat.
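If you want to see the same halving idea in code, here’s a toy sketch (assuming System and System.Collections.Generic; the ordered list of suspects and the worksUpTo test are whatever makes sense for your system, and this assumes exactly one failure):

// "chain" is the ordered list of suspects (cable, switch, card, driver, ...)
// and worksUpTo(i) is whatever test proves that everything up to and
// including suspect i is working.
static int FindFirstFailure(IList<string> chain, Func<int, bool> worksUpTo)
{
    int low = 0, high = chain.Count - 1;
    while (low < high)
    {
        int mid = (low + high) / 2;   // check the middle of what's left
        if (worksUpTo(mid))
            low = mid + 1;            // that half is fine; look downstream
        else
            high = mid;               // failure is at or upstream of mid
    }
    return low;                       // index of the first failing suspect
}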

As you start to gain experience, you start to get faster because you can start to assign relative probabilities of failure to each component. Maybe you’ve had a rash of bad network cards recently, so you might start by checking that.

In industrial automation, I’ve seen a pattern that pops up again and again that helps me narrow the problem domain, so I thought I’d share. Consider this scenario: someone comes to you with a problem: “the machine works fine for a long time, and then it starts throwing fault XYZ (motion timeout), and then after ten minutes of clearing faults, it’s working again.” These annoying intermittent problems can be a real pain, because it’s sometimes hard to reproduce the problem, and it’s hard to know if you’ve fixed it.

However, if you ask yourself one more question, you can easily narrow it down. “Is the sensor that detects the motion complete condition a discrete or analog sensor?” If it’s a discrete sensor, the chance that the problem is in the logic is almost nil. I know our first temptation is always to break out the laptop, and a lot of people have this unrealistic expectation that we can fix stuff like this with a few timers here or there, but that’s not going to help. If you have discrete logic that runs perfectly for a long time and then suddenly has problems, it’s unlikely there’s a problem in the logic. There’s a 99% certainty that it’s a physical problem. Start looking for physical abnormalities. Does the sensor sense material or a part? If yes, is the sensor position sensitive to normal fluctuations in the material specifications? Is the sensor affected by ambient light? Is the sensor mount loose? Is the air pressure marginal? Is the axis slowing down due to wear?

The old adage, “when all you have is a hammer, every problem is a nail”, is just as true when the only tool you have is a laptop. Don’t break out the laptop when all you need is a wrench.

Book Review: The Lights in the Tunnel

I was paging through the Amazon store on my Kindle when I came across a book that caught my eye: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (Volume 1)

It’s not every day you come across a book about Automation, and for $6, I figured, “what the heck?”

The author, Martin Ford, is a Computer Engineer from California. To summarize, he’s basically saying the following:

  • Within 80 years, we will have created machines that will displace significantly more than half of the current workforce

This is a topic that interests me. Not only do I have a career in automation, but I’ve previously wondered about exactly the same question that Ford poses. What happens if we create machines advanced enough that a certain segment of the population will become permanently unemployed?

The title of the book comes from Ford’s “Tunnel Analogy”. He tries to model the economy as a tunnel of lights, with each light representing a person, its brightness indicating that person’s wealth, and the tunnel lined with other patches of light: businesses. The lights float around, interacting with the businesses. Some businesses grow larger and stronger while others shrink and die off, but ultimately the brightness of the tunnel (the sum of the lights) appears to be increasing.

I found the analogy to be a bit odd myself. Actually, I wasn’t quite sure why an analogy was necessary. We’re all pretty familiar with how the free market works. If you don’t get it, I don’t think the tunnel analogy is going to help you. In fact, one excerpt from his description of the tunnel makes me wonder if Ford himself even “gets” the concept of how the economy works:

As we continue to watch the lights, we can now see that they are attracted to the various panels. We watch as thousands of lights steam toward a large automaker’s panels, softly make contact and then bounce back toward the center of the tunnel. As the lights touch the panel, we notice that they dim slightly while the panel itself pulses with new energy. New cars have been purchased, and a transfer of wealth has taken place.

That particular statement irked me during the rest of the book. That’s not a good illustration of a free market; that’s an illustration of a feudal system. In a free market, we take part in mutually beneficial transactions. The automaker has a surplus of cars and wants to exchange them for other goods that it values more, and the consumer needs a car and wants to exchange his/her goods (or promise of debt) in exchange for the car. When the transaction takes place, presumably the automaker has converted a car into something they wanted more than the car, and the consumer has converted monetary instruments into something they wanted more: a car. Both the automaker and the consumer should shine brighter as a result of the transaction.

Ford has confused money with wealth, and that’s pretty dangerous. As Paul Graham points out in his excellent essay on wealth:

Money Is Not Wealth

If you want to create wealth, it will help to understand what it is. Wealth is not the same thing as money. Wealth is as old as human history. Far older, in fact; ants have wealth. Money is a comparatively recent invention.

Wealth is the fundamental thing. Wealth is stuff we want: food, clothes, houses, cars, gadgets, travel to interesting places, and so on. You can have wealth without having money. If you had a magic machine that could on command make you a car or cook you dinner or do your laundry, or do anything else you wanted, you wouldn’t need money. Whereas if you were in the middle of Antarctica, where there is nothing to buy, it wouldn’t matter how much money you had.

There are actually two ways to create wealth. First, you can make it yourself (grow your own food, fix your house, paint a picture, etc.), or secondly you can trade something you value less for something you value more. In fact, most of us combine these two methods: we go to work and create something that someone else wants so we can trade it for stuff that we want (food, cars, houses, etc.).

Later in the book, Ford makes a distinction between labor-intensive and capital-intensive industries. He uses YouTube as an example of a capital-intensive business because it was purchased (by Google) for $1.65B and it doesn’t have very many employees. I can’t believe he’s using YouTube as an example of a capital-intensive industry. The new crop of online companies consists of extremely low-overhead endeavors. Facebook was started in a dorm room. Again, Ford seems to miss the fact that money is not equal to wealth. Google didn’t buy YouTube for the capital; they bought the audience. Google’s bread and butter is online advertising, so they purchased YouTube because its users are worth more to Google than they were to the YouTube shareholders who sold out. Wealth was created during the transaction because all parties feel they have something more than they started with.

Back to Ford’s premise for a moment: is it possible that we could create machines advanced enough that the average person would have no place in a future economy? I don’t find it hard to believe that we could eventually create machines capable of doing most of the work that we do right now. We’ve certainly already created machines that do most of the work that the population did only decades ago. The question is, can we get to the point where the average person has no value to add?

Let’s continue Ford’s thought experiment for a moment. You and I and half the population are now out of work and nobody will hire us. Presumably the applicable constitutional elements are in place so we’re still “free”. What do we do? Well, I don’t know about you, but if I had no job and I was surrounded by a bunch of other people with no job, I’d be out foraging for food. When I found some, I’d probably trade a bit of it to someone who could sew, in exchange for patching up my shirt. If I had a bit of surplus, I’d probably plant a few extra seeds the next spring and get a decent harvest to get me through the next winter.

I’m not trying to be sarcastic here. I’m trying to point out the obvious flaw in the idea that a large percentage of the population couldn’t participate in the economy. If that were the case, the large part of the population would, out of necessity, form their own economy. In fact, if we’re still playing in Ford’s dreamland here, where technology is so advanced that machines can think and perhaps even nanotechnology is real, I’d probably hang around the local dump and forage for a bit of technology there. The treasures I’d find there would probably put me in more luxury than I currently have in 2010.

So, if you take the thought experiment to the extreme, it breaks down. Given a free society divided into haves and have-nots, where the haves don’t have any use for the have-nots, then what you really have is two separate and distinct societies, each with its own bustling economy. Whether or not there is trade between those two economies, one thing is certain: almost everyone still has a job.

Of course, it’s not like we’re going to wake up tomorrow and technology will suddenly throw us all out of our jobs. The shift in technology will happen gradually over time. As technology improves, people will need to adapt (as we do every day). As I’ve said before, I think a major shift away from the mass consumption of identical items is already underway. As the supply of generic goods goes up, our perceived value of them goes down.

Ford doesn’t seem to participate in automation on a daily basis, so I think he lacks a feel for what automation really does. Automation drives down the cost, but it also increases the supply and reduces the novelty at the same time. Automated manufacturing makes products less valuable, but by contrast it makes people more valuable.

There’s a company out there called Best Made Co. that sells $200 hand-made axes. There’s a three week waiting time. That’s a feature actually: it’s so valuable to people that there’s a three week lead time. It’s made by hand. It’s made by hand by people who are passionate about axes. Feature? I think so.

In Ford’s dystopia, when the robber-barons are sitting atop their mountains of widgets that they’ve produced in their lights-out factory, don’t you think one of them might want to buy a sincere story? Wouldn’t they be interested in seeing a movie, or going to church on Sunday, or reading a book? When all of your basic needs are met, these higher-level needs will all see more demand. They’re also hard to automate. Some things have value because they’re done by people. Some things would be worth less if you did automate them:

  • Relationships (with real people)
  • Religion
  • Sports
  • The Arts
  • “Home Cooked” or “Hand-Made”
  • Stories (of origins, extremes, rescues, journeys, relationships, redemption, and the future)

Do you recognize that list? That’s the list of things we do when we’re finished with the drudgery of providing for our survival. We cheer for our sports team on a Sunday afternoon, or go and see an emotional movie on Friday night. Some people buy $200 axes (or iPhones, anyone?) because they come with a fascinating story that they can re-tell to their friends. (Bonus points if it’ll get you laid.)

Ford scoffs at the idea of a transition to a service based economy. He suggests implementing heavy taxes on industry and redistributing that to people who otherwise would have been doing the job the robots are doing, just so they can buy the stuff the robots are producing. He can’t see anything but an economy based on the consumption of material goods. I say: go ahead and automate away the drudgery of daily existence, make the necessities of life so cheap they’re practically free, and let’s get on with building real wealth: strong relationships, a sense of purpose, and a society that values life-long self improvement (instead of life-long accumulation of crap). By making the unimportant stuff less valuable, automation is what will free us to focus more on what’s important.

Powering on a TRS-80: a Blast from the Past

Last night my 2 year old daughter came across a TRS-80 in our basement that my grandfather had given me, and she started banging away on the keys. I decided to plug it in and switch it on for her so she could get some feedback. I’d never turned it on before… but it worked! Well, kind of…

First it came up with a message that just said:

Cass?

I figured that must mean “Cassette” and there was none, so I just pressed N and hit enter. Then it said:

Memory Size?

The front panel said 48K, so I dutifully typed 48 and pressed enter. It didn’t seem to like the answer, or indeed anything else I tried to type in. When I got the computer there was a pile of books with it, and this one was on the top:

Helpfully, the first thing it tells you to do is to just hit enter when it asks you that question. I did, and after a couple thoughtful seconds, it gave me the good old “ready” prompt:

READY
>

The book suggested typing CLS to clear the screen. I did this, but the L key didn’t work. I tried again. Nope…

Ok, so now it was a challenge. Isn’t it interesting that this old boat anchor could pass the time calmly in my basement when I thought there was nothing wrong with it, but the moment it needed repair, it became the focus of attention? I brought it up to the dining room table and started taking it apart (interestingly, my wife came home after it was sitting there disassembled, and it didn’t even faze her… apparently having the guts of a computer sprawled over the dining room table no longer provokes even the slightest surprise).

Amazingly, the “warranty void if removed” sticker was still intact over one of the screws. It felt good to void the warranty on an almost 30 year old computer. Once I got it apart, it wasn’t too hard to see what the problem was:

In case you didn’t quite catch that, here’s a close-up:

Yes, somewhere in this computer’s previous life, an industrious little rodent managed to build a nest under the lower floppy drive bay, and thoughtfully chewed through a few of the wires in the ribbon cable connecting the keyboard to the main board. After vacuuming it out I started searching for a suitable replacement. I took an unused IDE ribbon cable from my Windows Home Server. The TRS-80 ribbon cable had 20 pins, and the IDE cable had many more, but about 5 of the wires were “twisted” on the IDE cable. I had to carefully take it apart, untwist the wires, and put it back together. Then I was able to replace the keyboard cable with the new one, powered it up, and it worked!

READY
> 10 PRINT "I'M ALIVE!"
> 20 GOTO 10
> RUN
I'M ALIVE!
I'M ALIVE!
I'M ALIVE!
...

I hunted through the other documentation that came with it, and thought I’d share the nostalgia:

The one on the bottom left is particularly interesting. It’s literally a catalog the size of a Sears catalog, filled cover to cover with programs you could buy via mail order (anywhere from a couple of dollars to many thousands). It’s indexed by author, by industry, by application type, etc.

I never had a TRS-80 before, but I had an Atari 800XL, which was also programmed in BASIC (old school, with line numbers). It brought back many memories. I hope I could provide a moment of nostalgia for some of you too.

Now to figure out how to hook this up to something… 🙂

Good Function Blocks, Bad Function Blocks

In case you’ve never read my blog before, let me bring you up to speed:

  • Write readable PLC logic.

Now, I’m a fan of ladder logic, because when you write it well, it’s readable by someone who isn’t a programmer, and (in North America, anyway) maintenance people frequently have to troubleshoot automation programs and most of them are not programmers.

That doesn’t mean I’m not a fan of other automation languages. I think structured text should be used when you’re parsing strings, and I like to use sequential function chart to describe my auto-mode logic. I’m also a fan of function block diagram (FBD), particularly when working with signal processing logic, like PID loops, etc.

What I’m not a fan of is hard-to-understand logic. Here’s FBD used wisely:

Here’s an example of FBD abuse:

I’m still reading Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. He’s talking about traditional PC programming, but one of the “rules” he likes to use is that functions shouldn’t have many inputs. Ideally 0 inputs, maybe 1 or 2, possibly 3, but never more than 3. He says if you go over 3, you’re just being lazy. You should just break that up into multiple functions.
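Here’s the same rule in C# terms, with a made-up motion example (not something from the book):

// Hard to use and hard to read: seven inputs, half of them flags
void Move(int axis, double target, double velocity, double accel,
    double decel, bool absolute, bool waitForDone)
{
    // ...
}

// Grouping the related inputs, and splitting the modes into separate
// functions, gets each signature back under Martin's limit of three
class MotionProfile
{
    public double Velocity { get; set; }
    public double Accel { get; set; }
    public double Decel { get; set; }
}

void MoveAbsolute(int axis, double target, MotionProfile profile)
{
    // ...
}

void MoveRelative(int axis, double distance, MotionProfile profile)
{
    // ...
}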

I think that applies equally well to FBD. The reader can easily reason about the first image above, but the second one is just a black box with far too many inputs and outputs. If it doesn’t work the way you expect (and it’s doubtful it does), you have to keep digging inside of it to figure it out. Unfortunately, once you’re inside, all the variable names change, etc.

I understand the necessity of code re-use, but not code abuse. If you find yourself in the situation of example #2, ask yourself how you can refactor it into something more readable. After all, the most likely person who has to read this later is you.

SoapBox Snap is Finally Released

At long last, I’ve finally released a first version of SoapBox Snap, a free and open source ladder logic editor and runtime for your PC:

SoapBox Snap

After all this time, it’s finally ready? Yes. I’m admittedly a perfectionist, and yet I admit this software isn’t perfect. Software never is, really. But in the world of open source software, the question is, “is it useful?” I believe SoapBox Snap is already useful, even in this early form. You can write logic that controls your outputs, which is the entire point of this software. More features will come over time.

One really cool feature that I’m glad made it into the first release is a driver for Twitter. You can easily configure SoapBox Snap to watch your own Twitter feed for status updates. The driver exposes a “Last Status” signal with the text of your last status, and a signal that pulses any time your status changes. You can compare the status text to certain key phrases, like “turn on light”, and use it to drive outputs. I think it would be neat to tie it to your garage door opener: if you forget your keys, you can just pull out your phone and tweet some key phrase like “darn it I forgot my keys!” and the garage door could open!

Please check it out, download it, and play with it. If you have any neat ideas of things you could do, please drop me a line. Have fun!

Clean Ladder Logic

I’ve recently been reading Clean Code: A Handbook of Agile Software Craftsmanship. It’s written by Robert C. “Uncle Bob” Martin of Agile software (among other) fame. The profession of computer programming sometimes struggles to be taken seriously as a profession, but programmers like Martin are true professionals. They’re dedicated to improving their craft and sharing their knowledge with others.

The book is all about traditional PC programming, but I always wonder how these same concepts could apply to my other obsession, ladder logic. I’m the first to admit that you don’t write ladder logic the same way you write PC programs. Still, the concepts always stem from a desire for Readability.

Martin takes many hard-lined opinions about programming, but I think he’d be the first to admit that his opinions are made to fit the tools of the time, and those same hard-and-fast rules are meant to be bent as technology marches on. For instance, while he admits that maintaining a change log at the top of every source file might have made sense “in the 60’s”, the rise of powerful source control systems makes this obsolete. The source control system will remember every change that was made, who made it, and when. Similarly, he advocates short functions, long descriptive names, and suggests frequently changing the names of things to fit since modern development environments make it so easy to rename and refactor your code.

My favorite gem is when Martin boldly states that code comments, while sometimes necessary, are actually a failure to express ourselves adequately in code. Sometimes this is a lack of expressiveness in the language, but more often laziness (or pressure to cut corners) is the culprit.

What would ladder logic look like if it was “clean”? I’ve been visiting this question during the development of SoapBox Snap. For instance, I think manually managing memory, tags, or symbols is a relic of older under-powered PLC technology. When you drop a coil on the page in SoapBox Snap, you don’t have to define a tag. The coil is the signal. Not only is it easier to write, it prevents one of the most common cardinal sins of beginner ladder logic programming: using a bit address in two coil instructions.

Likewise, SoapBox Snap places few if any restrictions on what you can name your coils. You don’t have to call it MTR1_Start – just call it Motor 1: Start. Neither do you need to explicitly manage the scope of your signals. SoapBox Snap knows where they are. If you drop a contact on a page and reference a coil on the same page, it just shows the name of the coil, but if you reference a coil on another page, it shows the “full name” of that coil, including the folders and page names of your organization structure needed to find it. Non-local signals are obviously not local, but you still don’t have to go through any extraneous mapping procedure to hook them up.

While we’re on the topic of mapping, if you’ve read my RSLogix 5000 Tutorial then you know I spend a lot of time talking about mapping your inputs and your outputs. This is because RSLogix 5000 I/O isn’t synchronous. I think it’s pointless to make the programmer worry about such pointless details, so SoapBox Snap uses a synchronous I/O scan, just like the old days. It scans the inputs, it solves the logic, and then it scans the outputs. Your inputs won’t change in the middle of the logic scan. To me, fewer surprises is clean.
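To illustrate the idea, here’s a sketch of the classic synchronous scan (this is not SoapBox Snap’s actual code; ReadAllInputs, SolveLogic and WriteAllOutputs are placeholders for the sake of illustration):

while (running)
{
    var inputs = ReadAllInputs();       // snapshot the physical inputs
    var outputs = SolveLogic(inputs);   // inputs can't change mid-scan
    WriteAllOutputs(outputs);           // then write everything out at once
}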

I’ve gone a long way to make sure there are fewer surprises for someone reading a ladder logic program in SoapBox Snap. In some ladder logic systems, the runtime only executes one logic file, and that logic file has to “call” the other files. If you wanted to write a readable program, you generally wanted all of your logic files to execute in the same order that they were listed in the program. Unfortunately on a platform like RSLogix 5000, the editor sorts them alphabetically, and to add insult to injury, it won’t let you start a routine name with a number, so you usually end up with routine names like A01_Main, A02_HMI, etc. If someone forgets to call a routine or changes the order that they execute in the main routine, unexpected problems can surface. SoapBox Snap doesn’t have a “jump to page” or “jump to routine” instruction. It executes all logic in the order it appears in your application and each routine is executed exactly once per scan. You can name the logic pages anything you want, including using spaces, and you can re-order them with a simple drag & drop.

Program organization plays a big role in readability, so SoapBox Snap lets you organize your logic pages into a hierarchy of folders, and it doesn’t limit the depth of this folder structure. Folders can contain folders, and so on. Folder names are also lenient. You can use spaces or special characters.

SoapBox Snap is really a place to try out some of these ideas. It’s open source. I really hope some of these innovative features find their way into industrial automation platforms too. Just think how much faster you could find your way around a new program if you knew there were no duplicated coil addresses, all the logic was always being executed, and it’s always being executed in the order shown in the tree on the left. The productivity improvements are tangible.