I was at some trade show wandering around with a cynic who was pointing out that every booth advertised productivity improvements. “Look how many improvements we’ve made to productivity, and this has gone on for years,” he said. “By now, we must be able to make incredible things with no effort.” In the embedded world, productivity is, of course, not incredible – or at least not incredibly good – but this can change. Using Linux or BSD as platforms in embedded development is a huge step, although the Legacy RTOS (LRTOS) vendors and the 16-bitters are fighting all the way. Our customers who come from the LRTOS world are often amazed to discover that many complex problems (like displaying a graphical window on a remote machine) are trivial on a modern POSIX operating system. And the productivity gains of hardware platform independence are also big: You mean I can start software development now on a PC or reference board and just run the same software on our hardware when (or if) our hardware is ready? You mean we don’t have to start six months late?
A standard data acquisition program on RTLinuxPro is so easy it’s hard to believe.
See the post on control loops
Synchronization is hard in real-time applications, but not as hard as people imagine. If you follow a few simple rules you can make it manageable.
- Never force priority and mutual exclusion to fight each other. You can’t mean “Task A is more important than Task B” and “Task B should be able to lock Task A out of some data structure as long as it wants” at the same time.
- Long critical sections are sure signals of bad design. Use a simpler data structure or a client/server architecture or something.
- Stick to two or three mechanisms. If semaphores and RT-Fifos don’t do the trick, then maybe you should simplify your design.
See my paper for more details.
From the always readable Joel on Software:
Attention, FogBugz competitors: a court has ruled that you are welcome to continue to advertise your products when people search for FogBugz on Google. I actually don’t think there’s anything wrong with this although it does show a certain lack of class, mm, don’t you think? You don’t see Wal*Mart advertising when you search for Tiffany.
Type “rtlinux” in Google. RTLinux is a registered trademark of FSMLabs. Actually, the number of companies on the right has declined over the last year.
One of our customers wrote to us today and said that he was impressed with how enthusiastic our engineers are about our software and customer applications. And why not? Real-time software is core technology for so many incredible applications.
Here’s Mr. Brunel in front of the Great Western and the Pratt & Whitney F135 engine being tested. RTCore BSD and RTLinux are being used in the most complex and brilliant engineering works of our time. Few of them are as big and bright as the F135 doing an afterburner test, but several are just as impressive.
Imagine designing an automobile with a motor that runs really well on a demonstration frame, with none of those heavy panels or safety devices, and only tested on a short flat track. Imagine an engineering team designing an automobile or truck deciding that since high torque is only needed rarely, they don’t have to worry about it. Imagine that team being surprised to find that as they add more to their prototype vehicle and start testing it on hills, it starts to fail more and more often. And imagine the management team who enthusiastically endorsed the use of a very low cost engine and similar tactics on project after project, complaining about the low margins and high cost of development in their industry where so many projects fail or need massive redesigns!
The natural tendency of an engineer is to assume that “security” is an engineering issue that reduces to a type of reliability. And the Common Criteria security document outlines a solid engineering approach (written in astoundingly opaque bureaucratese) for assuring software is designed, developed, and tested to limit security failures. But “software security” means different things in other contexts.
Someone made serious money from construction of the Maginot line.
- Ross Anderson’s famous Why Information Security is Hard paper explains that “security” is often concerned with avoiding liability or blame while evading actually paying the costs of engineering security. For many people “software security” means “some hoops we have to jump through to satisfy an auditor or evade responsibility”.
- Green Hills Software is using “security” as a method for trying to frighten people away from Linux. See my GrokLaw note for details.
- The Digital Rights Management (DRM) people mean “Make sure we get paid, no matter how dangerous or insecure this makes the system from the customer point of view.” See DRM and Security.
The lesson here is that “security” is more like “efficiency” than “reliability”. You’d be wise to find out “who gets the benefit from this meaning of security” before you sign up. Bruce Schneier has a great story.
The other week I visited the corporate headquarters of a large financial institution on Wall Street; let’s call them FinCorp. FinCorp had pretty elaborate building security. Everyone — employees and visitors — had to have their bags X-rayed.
Seemed silly to me, but I played along. There was a single guard watching the X-ray machine’s monitor, and a line of people putting their bags onto the machine. The people themselves weren’t searched at all. Even worse, no guard was watching the people. So when I walked with everyone else in line and just didn’t put my bag onto the machine, no one noticed.
It was all good fun, and I very much enjoyed describing this to FinCorp’s VP of Corporate Security. He explained to me that he got a $5 million rate reduction from his insurance company by installing that X-ray machine and having some dogs sniff around the building a couple of times a week.
I thought the building’s security was a waste of money. It was actually a source of corporate profit.
So next time you see an impressive security credential or a virtual X-ray machine with patriotic slogans all over it, please look for the actual motivation and use. You might be surprised. In fact, the Common Criteria approach to software security asks for just this type of caution. You are supposed to do a thorough threat evaluation so that you can identify what your security needs really are instead of being stampeded into using someone else’s definition. Unfortunately, the Common Criteria documents are in exactly the kind of prose that you’d expect to see resulting from the collaboration of government security and standards organizations. See this note for more.