In his August 5 ComputerWorld Canada editorial, Michael MacMillan quoted from two reports indicating that IT decisions today are often made by ‘gut feel’, and that information that would help decision-makers make a better choice is often available but buried under layers of difficult-to-access coverings. He suggests the situation is probably no different from what it was in 1964. So I decided to take this editorial as a cue for my next article in the ‘Memory Bank’ series.

The year 1964 coincided with my invitation to establish a Department of Computer Science at the University of Manitoba (it later became the Institute for Computer Studies, with me as director). Prior to that time I had worked with international companies. My comments will range from 1954 to 1975.

Imperial Oil was planning to install a computer in its Calgary offices. Which should it choose? I was asked to investigate. At the time (early ’60s) some test suites had been produced, supposedly designed to give pointers as to how well a machine might perform in different business environments. We considered these useless for our needs and developed our own criterion: give each potential vendor a sample of our programs and have them show us that those programs would run efficiently on their systems. I spent several days travelling the freeways of Los Angeles by taxi going between possible vendors. One interesting machine was the Burroughs B5000, whose architecture was based on the Algol and Cobol languages. I did suggest this as a possible contender but the corporate people in the U.S. insisted on IBM (not an unusual occurrence).

Database software was virtually non-existent in 1964 and companies tended to develop their own, such as Imperial Oil with its Well Data Base System in Calgary, and Westinghouse in Ontario. Standards for databases were being developed as an offshoot of the Codasyl committee (which specified Cobol standards). One of the early standards proposed was so complicated that one would need a PhD in computer science to understand it and there were few of those around at that time. When database software started to develop in the early ’70s, decisions had to be made on whether to use a hierarchical, network or relational database structure. Whichever structure you chose, the product usually had severe limitations.

Software was another management consideration. Compilers on many systems were slow and many were unreliable. As one example, I did some consulting for the Hudson’s Bay Company involving the use of an online Fortran system. Everything worked fine with the initial 15 variables we were using to forecast growth, but when we added a few more variables the system crashed, even though we were well within the specified limits. Concerns of this kind led me, when I negotiated with IBM for the purchase of their latest offering, an IBM 360/65, to insist on a software penalty clause. If Fortran did not work I received 75 per cent of the equivalent machine rental until it worked satisfactorily; Cobol carried a 50 per cent penalty, Assembly language 100 per cent, et cetera (the total penalty could never exceed 100 per cent). I received immediate inquiries from Shell Oil in Toronto on how I had managed this.

Hardware reliability was another management issue. I remember a consulting proposal in which the choice was between an IBM 7044 and an IBM 360/65, which I happened to have. A group at MIT told me that the 360/65 was potentially unreliable and that the 7044 should be used, which was what the consultants recommended. As it turned out I had a few initial problems with downtime, but they were all fixed after a few words with IBM.

Vendors were also very reluctant to attach equipment from other vendors to their systems, even when purchased. I told IBM plainly that as I had purchased the equipment I should be able to do whatever I liked with it and they hummed and hawed about problems of maintenance and other possible hitches. I finished up by attaching two remote CDC systems at the university’s medical school and a Digital PDP9 that, in turn, was attached to a cyclotron in the department of physics. Everything worked fine (I had some good software people) and it rather startled the industry. I received a phone call from New York asking if we had attached a PDP system to our IBM mainframe. When they received a positive answer they asked us to send them the software to do it, no matter what it cost.

So yes, we had management decision problems in 1964, but we often had to fly ‘by the seat of our pants’ since there were no obscure hidden layers of data to access that might have helped us. It was all in our IT network, where most of us knew each other, using the expensive (at that time) phone system for access.

Hodson is an Ottawa-based IT industry veteran who has helped develop Canadian computer science programs. Contact him at
