Applying new technology results in greater and greater specialization. It frees us from many mundane tasks, but it also makes us more dependent on the performance of technologists. We know less and less about the details of how our phones work, how our cars run, how we are charged for basic services, which drugs will work, which retirement plan is best for us and a whole host of other issues. Instead, we depend more and more on experts in fields that impact our daily lives, experts who are asked to give opinions and make decisions on our behalf. Whenever one party depends on the expertise of another, the opportunity for moral hazard exists.
In many cases the expert attempts to avoid the moral hazard by taking a “buyer beware” approach. This is just a dodge, and it often results in disasters ranging from Solyndra to Bernie Madoff. Outrage on many levels has supercharged our culture to the point that heretofore “crazies” like Donald Trump and Bernie Sanders are being considered as viable solutions to various perceived injustices. Even Hillary “Whitewater” Clinton feels obligated to bite the hands that support her campaigns and pledge newer and tougher regulations.
Sadly, new laws and regulations will do little to mitigate technology-related moral hazard. The landscape is too complex and variable to be adequately handled by regulation. The CFR would explode into an even more unmanageable and unenforceable mess.
The only way to make any sense of technological moral hazard is to encourage the development of widely held ethical standards – standards that are implemented by consensus and “enforced” by constant peer pressure. This runs counter to our malum prohibitum culture, but it is the only practical way to proceed.
I would like to propose five basic concepts that I think can serve as the basis for technology ethics. These can be thought of as tenets to be taught, virtues to be upheld and even details to be spelled out in Mission Statements, Employment Policies and vendor contracts. We might call them the Five Rings of Technology Ethics (cf. Miyamoto Musashi):
The Five Rings of Technology Ethics:
- Is it True?
- How big is the Box?
- How long is the Timeline?
- What is the True Cost?
- What is Failure Like?
The first question is rather obvious. If the claims being made are not true, then whoever is making the claim is acting unethically. But it goes beyond that. Whoever knows that what is being claimed is untrue and fails to speak up is also acting unethically. It is not just a matter of “not lying.” It includes the affirmative duty not to allow known lies to be told. I don’t know that there is much more to say about that other than to note that this is not a simple thing to do.
The matter of “truth” can get complicated when we add to it the issue of uncertainty. In other words, we cannot deal with the matter of truth simply by reporting the “average” or “general” result. We have not spoken “truth” until we report our “certainty” about that “truth.”
The simplest example is from analytical measurements. If we report the concentration of a toxic chemical as being, on the average, at a very low level, but don’t report that we found some “hotspots” of acute toxicity, we are not conveying “truth.” Likewise, if we claim that a software program is “reliable” but don’t mention that our only source of information was from the undocumented claims by the company on their website, we have not reported anything that can be considered “true.”
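The hotspot example above can be made concrete with a short sketch. All of the numbers below are invented for illustration; the point is only that a summary statistic can conceal exactly the information the client needs:

```python
# Hypothetical concentration readings (ppm) from ten sampling sites.
# The values, and the 10 ppm toxicity threshold, are invented for illustration.
readings_ppm = [0.2, 0.3, 0.1, 0.2, 0.4, 0.3, 0.2, 41.0, 0.1, 0.2]
THRESHOLD_PPM = 10.0

average = sum(readings_ppm) / len(readings_ppm)
peak = max(readings_ppm)
hotspots = [x for x in readings_ppm if x > THRESHOLD_PPM]

print(f"average: {average:.2f} ppm")   # 4.30 ppm -- sounds reassuringly "low"
print(f"peak:    {peak:.1f} ppm")      # 41.0 ppm -- an acutely toxic hotspot
print(f"hotspots above threshold: {len(hotspots)}")
```

Reporting only the 4.3 ppm average would be literally accurate and still not “true” in the sense this section uses the word; the 41 ppm hotspot is the fact that matters.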
“Truth” is one of the first casualties of our “instant access,” Internet culture. Unrelated, unsubstantiated, uncritical data points abound. It takes some effort and care to sort through that and find the “truth.” The expert is morally obligated to make that effort.
The second question is a little less obvious. Whether we are talking about the scope of a project, the boundary of a mass balance or the subjects of a survey, getting the proper context is crucial. Frequently, inappropriate manipulation of the context results in irrelevant or even deceptive information. Again, this goes beyond just being clear about the context: the expert has an ethical responsibility to investigate the appropriateness of the context to the needs of the user. For example, a $3,000 server might work just fine for a 5-person startup, but would be totally inadequate for a 50-person shop. If an IT consultant knew that the current $3,000 server was on the verge of collapse, he/she would be obligated to speak up. Likewise, if the boundary being used for a mass balance calculation would likely give erroneous results and the Process Engineer knew it, it would be unethical to just do the calculation and report the result without a clear indication that the result was suspect.
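The mass balance point can be sketched numerically. The streams and figures here are hypothetical; they only show how a boundary drawn too narrowly turns a routine purge stream into an alarming apparent loss:

```python
# Hypothetical steady-state balance around a process unit (all values in kg/h).
# Every figure is invented for illustration only.
feed_in = 100.0      # measured feed crossing the boundary
product_out = 80.0   # measured product crossing the boundary
purge_out = 15.0     # purge stream the narrow boundary fails to capture

# Boundary drawn without the purge: 20 kg/h appears to simply vanish.
apparent_loss = feed_in - product_out

# Boundary redrawn to include the purge: only 5 kg/h is truly unaccounted for.
true_unaccounted = feed_in - product_out - purge_out

print(apparent_loss)     # 20.0
print(true_unaccounted)  # 5.0
```

Both calculations are arithmetically correct; only the second boundary answers the question the client is actually asking.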
The third question has some similarity to the second, but it relates specifically to time. Results, predictions and estimates always have some level of time dependency. What is profitable today may be unprofitable next year. What works well today could easily be obsolete in a few years and, in some cases, a few days. It is not ethical for an expert to withhold from a client that current software will soon be unsupported, that the standards supplied today will expire very soon, that the quotes given will be invalid before the project can get started or that the project timeline proposed is laughably optimistic. Furthermore, it is not ethical to obfuscate the uncertainty around time projections – especially when the expert knows that the time factor is critical to the client.
The question of “true cost” (the fourth question) can lead to a wide variety of ethical dilemmas. A frequent problem is virtually a “strategy” for some experts. The old, “Bid it low and make it up on Change Orders,” approach may be passé in some circles, but frequently surfaces in new technologies where cost controls are often underdeveloped. The expert can even end up in something of a “conflict” of interest. Often the expert sells hours to the client. If the expert knows of ways to cut costs for a client by trading capital dollars for “billable hours,” it would be unethical not to speak up even though it might not be in the best short-term interest of the expert.
And finally, the expert should always consider and make known what the failure scenario looks like. This includes a legitimate attempt to gauge the probability and consequences of various failure modes. Nowhere is this more important than when Health and Safety are at stake. There is no excuse for known hazards to be ignored. They should be openly and honestly considered as soon as they are realized. Beyond that, the expert should take stock of many reasonable “what if’s” and discuss them openly with clients and stakeholders. Business is all about managing risks and when the expert hides, minimizes or even exaggerates risks the client can be significantly and perhaps irreparably harmed. The expert should be prepared to suggest backups, safeguards and emergency strategies. The expert should be able to give good input into the probabilities and costs of failures and impacts of mitigation strategies.
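The “probability and consequences” framing above can be sketched as a simple expected-loss calculation. The failure modes, probabilities and costs here are all assumed for illustration, not drawn from any real project:

```python
# Hypothetical failure modes: (annual probability, consequence cost in $).
# All figures are invented for illustration only.
failure_modes = {
    "data loss":       (0.02, 500_000.0),
    "one-day outage":  (0.10,  40_000.0),
    "security breach": (0.01, 900_000.0),
}

# Expected annual loss = sum of probability x consequence over all modes.
expected_loss = sum(p * cost for p, cost in failure_modes.values())
print(f"expected annual loss: ${expected_loss:,.0f}")
```

Even a back-of-the-envelope table like this gives the client something honest to weigh against the cost of backups, safeguards and emergency strategies, which is the expert’s obligation here.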
We could go on with examples and illustrations, but these give a reasonable description of what the Five Rings of Technology Ethics entail. The overarching principle is that the technology expert should put the client’s welfare on at least an equal footing with his/her own. No one can be expected to engage in a voluntary relationship to their detriment, but for experts to victimize the uninitiated is unethical. Furthermore, it is bad economics. I don’t mean just that it is bad for the specific monetary exchange, but rather that it is bad for the economics of society.
Let me explain. If every transaction had to be scrutinized by second- and third-party experts, our economic interactions would begin to look like trying to get across the Gambian border. There would be a plethora of fees, fines and bribes. Whether we hire armies of attorneys, inspectors or regulators, the transactional costs will explode while quality, reliability and efficiency plummet. We can neither inspect in nor regulate in the controls necessary to make a more technically specialized society work. The only viable option is to promulgate, promote, encourage, cajole and coerce general adherence to common values and practices.
This can start with admonitions to practicing technocrats. Trade organizations and major companies should promulgate and regularly reinforce codes of ethical conduct in specific, subject matter relevant details. But that is just barely a start. Customers and consumers must demand ethical behavior and enforce it by the way they spend. Customers and consumers should ferret out, ostracize and, where practical, refuse to pay the unethical practitioner. Customers and consumers should be cautious and diligent about who they use and recommend as an “expert.” It is only when we all come to expect and even demand ethical behavior that abuses and failures will begin to subside and become the unusual rather than the common.
Stites & Associates, LLC, is a group of technical professionals who work with clients to improve laboratory performance and evaluate and improve technology by applying good management judgment based on objective evidence and sound scientific thinking. For more information see: www.tek-dev.net.