Despite Canada’s status as a leader in education, the country’s stagnant academic scores among young people are a bad sign for future generations hoping to integrate complex technologies, such as artificial intelligence, across society, according to Phil Vokins, Intel Canada interim general manager.
“What we’re saying is that Canada has lagged behind in some of the rankings for STEM education, technical education. And that was a problem…technology needs to have a more prominent role in education from early grades right through higher education.”
Canada received high praise for earning top-10 spots in the PISA world rankings out of 70 participating nations, scoring well above the 500-point average in reading, science, and mathematics. The top three spots belong to Singapore, Hong Kong, and Japan.
Between 2012 and 2017, Canadian university enrollment in mathematics, computer, and information science rose by 36 per cent, the highest growth of all education sectors. Enrollment in architecture, engineering, and related technologies grew by 11 per cent in the same period.
Although the reports positioned Canada as a leader in education, a closer examination by Man-Wai Chu, assistant professor at the University of Calgary, indicated that Canada’s PISA scores either declined or remained stagnant between 2006 and 2015. A similar trend was seen in post-secondary education, where top Canadian universities have held steady at the same rankings. Furthermore, in her report, Chu stated that almost 10 per cent of Canadian 15-year-olds do not have the science proficiency level required to participate fully in society.
STEM education is critical in preparing the next generation for an AI-connected world. An article in the Harvard Business Review called for an update to computer science curricula at the secondary level, hoping to place more emphasis on coding. Sparking interest in computer science among more kids would also increase diversity, the lack of which currently presents a major challenge for AI.
At the forefront of AI development is the health care industry. A Forbes report estimates that AI investment could top $6.6 billion by 2021 and that AI could help save $150 billion per year by 2026. The report listed robot-assisted surgery, virtual nursing assistants, and administrative workflow assistance as the most popular scenarios in which AI could be implemented. Datto’s 2019 MSP report found that 34 per cent of all MSPs are targeting health care.
To provide just one example of AI’s benefits in the health care industry, Vokins demonstrated prototype software developed by a Canadian partner to help health care workers determine a patient’s status through facial scanning. The app, which can run on any iPhone, scans the blood flow underneath the skin to instantly display blood pressure, heart rate, and breaths per minute.
Despite its life-saving potential, Vokins said that the general public may not be comfortable having intimate personal details so readily available.
“[The software] is not yet ready for prime time to start giving blood pressure because we don’t know if the public is able to deal with that information, but this is where the technology is going,” said Vokins. “Our teaching of kids in schools needs not only to teach programming and STEM education…But at the same time, it needs to ask the ethical questions of ‘is this a good thing?’ What if this great technology that you’re developing, or your brilliant idea, is used for a different application than the one you originally intended?”
To ensure future generations can help speed up the adoption of ethical AI health care solutions, Vokins stressed the importance of strong STEM education.
“I think it is incumbent on us to be at the leading edge of technical education for all the right reasons as discussed – the next generation is not only the developers of the technology, they are the politicians who will legislate for it, the business leaders who will determine its application, and the citizens who will vote for governments to act in everyone’s best interests.”
Researchers at the Stanford University School of Medicine echo this sentiment, calling for physicians and scientists to exercise caution when mixing AI into their decision-making processes.
“Because of the many potential benefits, there’s a strong desire in society to have these tools piloted and implemented into health care,” said Danton Char, assistant professor of anesthesiology, perioperative and pain medicine in a Stanford article. “But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it’s deployed at a large scale.”
The researchers’ primary concerns include data bias when creating clinical recommendations, whether physicians are educated on how the algorithms are created, patient data being used without adequate clinical context, and the possibility that AI tools could alter the relationship between doctors and patients.