[lug] software engineering
efm at tummy.com
Mon Nov 13 16:39:08 MST 2006
* On 2006-11-13 16:23 dio2002 at indra.com <dio2002 at indra.com> wrote:
> Engineers are humans, and humans are infallible. doesn't matter whether
I guess this is an example of the point you're trying to make. :)
You mean humans are fallible.
I've been reading a bit lately about "Crew Resource Management", which has
evolved into "Crew Risk Management" and "Medical Risk Management". Coming
out of the standard processes Flight Crews use to reduce errors,
specifically communication errors, and now being adopted by medical providers,
it's a way of looking at how people can back each other up to
reduce errors. And that's because humans are fallible. And every time we
switch context, either by being interrupted or by handing off a problem to
someone else, we need to make sure we can restore all of the context.
An example of this that we've all encountered is using the phonetic
alphabet to read back a word that needs to be entered exactly as typed over
the phone. "aBcd" = "lower case a as in alpha, upper case b as in bob,
lower case c as in charley, lower case d as in delta".
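As a rough illustration (my own sketch, not from the original post), here is a tiny Python helper that produces that kind of readback; the word table and its "bob"/"charley" substitutions just follow the example above:

```python
# Hypothetical sketch: spell a string out for error-free readback
# over the phone, using the word table from the example above.
WORDS = {
    "a": "alpha", "b": "bob", "c": "charley", "d": "delta",
    # ... extend with the rest of the alphabet as needed
}

def readback(s):
    parts = []
    for ch in s:
        word = WORDS.get(ch.lower(), ch)          # fall back to the raw char
        case = "upper case" if ch.isupper() else "lower case"
        parts.append(f"{case} {ch.lower()} as in {word}")
    return ", ".join(parts)

print(readback("aBcd"))
```

The point of the redundancy (case, letter, and word) is that any single mishearing can be caught by the other two cues.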
Another example is using a memorized checklist or written-down workflow to
make sure that you do it the same way each time, and can pass a task off to
someone else if you are interrupted. "I was at step 4, can you take over
from there?"
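To make the hand-off idea concrete, here is a minimal sketch (mine, with invented step names) of a written-down workflow where the current step travels with the task, so whoever picks it up knows exactly where to resume:

```python
# Hypothetical sketch: a checklist whose progress marker is part of
# the task itself, so an interrupted task can be handed off cleanly.
class Checklist:
    def __init__(self, steps):
        self.steps = list(steps)
        self.done = 0  # number of steps already completed

    def complete_step(self):
        self.done += 1

    def handoff_note(self):
        # What you would tell the person taking over.
        return f"I finished step {self.done}, next is: {self.steps[self.done]}"

# Invented example workflow, purely for illustration.
deploy = Checklist(["backup config", "apply patch", "restart service",
                    "verify logs", "notify team"])
deploy.complete_step()
deploy.complete_step()
print(deploy.handoff_note())
```

The design choice is that state lives in the checklist rather than in anyone's head, which is exactly what makes the interruption survivable.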
Even small errors can be expensive to fix. If you don't get it right the
first time, you've at least doubled the cost of doing it, and often much
more than doubled it.
Has anyone else seen Crew Resource Management in an IT or Systems context?
Regards,
Evelyn Mitchell
efm at tummy.com

tummy.com, ltd
Linux Consulting since 1995
Senior System and Network Administrators