Algorithms of one kind or another are used in most applications. How many times have you heard an unpleasant result blamed on the computer, with the implication that the computer program cannot err?
One example in Ontario that comes to mind is the way municipal real estate assessments are calculated, which seems to have no logical basis. In this case, staff are no doubt given instructions on how to input data each year. I suggest it is done blindly with no concept of how it affects the internal algorithms used in the assessment process.
When were those formulas created? How often, if ever, have they been updated? Are they still valid? Are the assessment body's managers willing to release the algorithms they are using, so that knowledgeable people can be satisfied that they are fair, unbiased and scientifically correct? Do you, as a programmer, have any responsibility to warn of foolish results?
Many years ago, an algorithmic process called 'linear programming' was introduced to the oil industry's refinery operations. It enabled the optimization of needed refinery outputs from a given set of crude oil inputs. Imperial Oil was one of the early users.
At first, data was input only by the scientists at head office, which created morale problems among the refinery managers.
This was an undesirable effect, and the scientists had other research to conduct, so refinery managers were given an in-depth orientation and became directly involved with data input. While perhaps not understanding the mathematics, they now knew the impact their data input decisions would make on refinery operations.
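The sort of optimization the refinery story describes can be sketched in miniature. Everything below — the two products, the per-barrel profits, and the capacity limits — is invented for illustration; a real refinery model would have hundreds of variables. For a two-variable problem the optimum sits at a corner of the feasible region, so a tiny solver can simply enumerate the corners:

```python
from itertools import combinations

# Hypothetical blending problem (all numbers invented): choose barrels
# of gasoline (g) and heating oil (h) to maximize profit.
#
#   maximize  9*g + 6*h            (assumed profit per barrel)
#   subject to    g +   h <= 100   (barrels of crude available)
#              2*g +   h <= 150    (processing hours)
#              g >= 0, h >= 0

# Each constraint as (a, b, c), meaning a*g + b*h <= c.
constraints = [
    (1, 1, 100),   # crude supply
    (2, 1, 150),   # processing hours
    (-1, 0, 0),    # g >= 0
    (0, -1, 0),    # h >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system a1*g + b1*h = r1, a2*g + b2*h = r2."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no corner here
    g = (r1 * b2 - r2 * b1) / det
    h = (a1 * r2 - a2 * r1) / det
    return g, h

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate corners are intersections of pairs of constraint boundaries.
candidates = [p for c1, c2 in combinations(constraints, 2)
              if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(candidates, key=lambda p: 9 * p[0] + 6 * p[1])
print(best)  # -> (50.0, 50.0): the optimal (gasoline, heating oil) split
```

Even in this toy version, the point of the Imperial Oil story is visible: change one input number — a capacity, a profit figure — and the recommended operating plan can shift entirely, which is why the people entering the data needed to understand the model.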
Unfortunately, with many information systems today, data is input by rote, with those handling it having little or no knowledge of the consequences should they make an error in input.
Particularly dangerous systems are those involving personal and business credit histories, where mistakes can have potentially devastating social or economic consequences. Does your program pick up such problems? I am equally concerned with the various polls that we are subject to in elections.
The general public is increasingly skeptical of phone inquiries, particularly with the growth of telemarketing. Are the polls becoming increasingly biased toward a group of people who seem happy to discuss their personal lives and beliefs with strangers?
As a programmer, your coding of algorithms will be part of a broader business plan, and you should be aware of any limitations the algorithm has, catering for them in your code. Some algorithms have limitations on input values; how does your program handle data outside the permitted ranges? Input data is often erroneous, a very recent example being a transaction on a foreign stock exchange.
The input of a share sell order should have been something like one share at $xxxx, but it went in as a sale of xxxx shares at $1, leaving the trader with a huge financial liability, especially as xxxx was more shares than existed.
I remember another example, when a returned box of chocolates was entered with a value of $1,000,000 instead of $10. It happened at year end and affected the year-end results, and was not picked up until after the books had been closed.
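Both anecdotes call for the same defence: plausibility checks on input before it is acted on. Here is a minimal sketch for the sell-order case; the limit values, the field names and the stand-in figure of 4,750 for the elided price are all invented for illustration, and a real system would draw them from reference data:

```python
# Hypothetical plausibility limits (invented for illustration; real
# systems would load these from market reference data).
SHARES_OUTSTANDING = 500_000            # assumed float for the stock
TYPICAL_PRICE_RANGE = (50.0, 5_000.0)   # assumed plausible price band

def check_sell_order(quantity, price):
    """Return a list of warnings for an implausible sell order."""
    problems = []
    if quantity > SHARES_OUTSTANDING:
        problems.append("quantity exceeds shares outstanding")
    low, high = TYPICAL_PRICE_RANGE
    if not low <= price <= high:
        problems.append(f"price {price} is outside the expected band")
    # A classic fat-finger pattern: quantity and price transposed.
    if price < low and low <= quantity <= high:
        problems.append("quantity and price may be transposed")
    return problems

# The erroneous trade, with 4,750 standing in for the article's $xxxx:
print(check_sell_order(quantity=4_750, price=1.0))   # flags two problems
# The intended order passes cleanly:
print(check_sell_order(quantity=1, price=4_750.0))   # -> []
```

A similar band check on the value field would have caught the $1,000,000 box of chocolates before the books were closed. The check does not have to block the transaction; even forcing a human to confirm an outlier is a large improvement over silent acceptance.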
We also have the many formulas that analyze our world economy and the global warming phenomenon.
Are the modeling tools being used meaningful?
Of course, the models will be developed by a scientific team, but in writing the code you should be aware of any limits on the input data used and introduce warning messages where there seems to be an abnormal result.
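That advice can be made concrete with a thin guard around the model. The formula, the calibrated input range and the expected output band below are all invented stand-ins; the point is only the shape of the wrapper, which warns instead of failing silently:

```python
import warnings

# Assumed limits for an invented toy model; a real team would supply
# the calibrated range along with the formula itself.
CALIBRATED_RANGE = (0.0, 10.0)   # inputs the model was validated for
EXPECTED_OUTPUT = (0.0, 1000.0)  # results considered plausible

def toy_model(x):
    # Stand-in for a formula supplied by the scientific team.
    return 3.2 * x ** 2 + 7.0

def guarded_model(x):
    """Run the model, warning on suspect inputs and abnormal results."""
    low, high = CALIBRATED_RANGE
    if not low <= x <= high:
        warnings.warn(f"input {x} is outside the calibrated range "
                      f"{low}-{high}; result may be meaningless")
    y = toy_model(x)
    out_low, out_high = EXPECTED_OUTPUT
    if not out_low <= y <= out_high:
        warnings.warn(f"result {y} looks abnormal "
                      f"(expected {out_low}-{out_high})")
    return y

guarded_model(4.0)    # within range: runs silently
guarded_model(25.0)   # out of range: warns on both input and result
```

The model still runs; the programmer's contribution is that nobody can later claim the abnormal result arrived without notice.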
In fact, your status within your company will be enhanced if you question the algorithms being used and make sure you cater for anomalies in entered data.
— Hodson is a theoretical physicist, speaker and writer. He can be reached at Bernie@genetix.ca.