Governments should prioritize the tangible implications of AI rather than getting swept up in Big Tech’s hype

Statistics Canada recently released a detailed report estimating which professions are likely to be affected by artificial intelligence in the next few years.

It concludes with an optimistic message for education and health-care professionals, suggesting that not only are they expected to retain their jobs, but their productivity will be enhanced by AI advancements. However, the outlook is grimmer for those in finance, insurance, information and cultural industries, who are predicted to see their careers derailed by AI.

Should doctors and teachers now breathe easy, while accountants and writers panic? Maybe, but not because of the data in this report.

What Statistics Canada offers here is a relatively meaningless exercise. It assumes the key determinant is the technology itself and how well it complements human effort, rather than the business models designed to undermine our shared humanity. In making this mistake, the report becomes yet another casualty of corporate-driven optimism, bought at the expense of uglier business realities.

High exposure to AI hype

Corporations pushing new innovations or products that play on our greatest hopes and fears is nothing new. What may be novel is the sheer scale of Big Tech’s ambitions for AI, which seem to reach into every industry.

It’s no surprise, then, that there is widespread fear about what industries and sectors will be replaced by AI. Nor is it surprising that Statistics Canada would seek to allay some of those fears.






The study groups jobs into three categories:

  • those with high AI exposure and low complementarity, meaning humans may be competing directly with machines for these roles;
  • those with high AI exposure and high complementarity, where automation could enhance the productivity of the workers who remain essential to the job;
  • those with low AI exposure, where replacement doesn’t yet appear to be a threat.

The report’s authors claim their approach — examining the relationship between exposure and complementarity — is superior to older methods that looked at manual versus cognitive or repetitive versus non-repetitive tasks when analyzing the impact of automation on workplaces.

However, by focusing on these categories, the study still buys into corporate hype. These categories of analysis were developed in 2021. Over the past few years, new windows have opened up, allowing us a clearer view of the ways Big Tech is rushing to deploy AI. The newly revealed unethical tactics render the predictive categories of exposure and complementarity fairly meaningless.

AI is often driven by people

Recent developments have shown that even jobs with high AI exposure and low AI complementarity are still relying on humans behind the scenes to do essential work. Take Cruise, the self-driving car company bought by General Motors in 2016 for more than $1 billion. Cab driving is a job with high AI exposure and low AI complementarity — we assume a cab is either being controlled by a human driver or, if it’s driverless, by AI.

As it turns out, Cruise’s “autonomous” cabs in California were not, in fact, driverless. There was remote human intervention every few miles.

To analyze this job accurately, there are three categories to consider: in-car human drivers, remote human operators and fully autonomous AI-driven vehicles. The second category makes complementarity fairly high here. But the fact that Cruise, and likely other tech companies, tried to keep this under wraps raises a whole new set of questions.

A General Motors Cruise vehicle drives through the streets of San Francisco in October 2023.
(Shutterstock)

A similar situation emerged at Presto Automation, a company specializing in AI-powered drive-thru ordering for chains like Checkers and Del Taco. The company described itself as one of the biggest “labor automation technology providers” in the industry, but it was revealed that much of its “automation” is driven by human labour based in the Philippines.

Software company Zendesk presents another example. It once charged customers based on how often its software was used to try to resolve customer problems. Now, Zendesk charges only when its proprietary AI completes a task without humans stepping in.






Technically, this scenario could be described as high exposure and high complementarity. But do we want to support a business model where the customer’s first point of contact is likely to be frustrating and unhelpful? Especially knowing businesses will roll the dice on this model because they won’t be charged for those unhelpful interactions?

Scrutinizing business models

As it stands, AI presents more of a business challenge than a technological one. Government institutions like Statistics Canada need to be careful not to amplify the hype surrounding it. Policy decisions need to be based on a critical analysis of how businesses actually use AI, rather than on inflated predictions and corporate agendas.

To create effective policies, it’s crucial that decision-makers focus on how AI is truly being integrated into businesses, rather than getting caught up in speculative forecasts that may never fully materialize.

The role of technology should be to support human welfare, not simply to reduce labour costs for businesses. Historically, every wave of technological innovation has raised concerns about job displacement. The prospect that future innovations may replace human labour is not new, nor is it something to fear; rather, it should prompt us to think critically about how the technology is being used, and who stands to benefit.

Policy decisions, therefore, should be rooted in accurate, transparent data. Statistics Canada, as a key data provider, has an essential role to play here. It needs to offer a clear, unbiased view of the situation, ensuring policymakers have the right information to make informed decisions.

The post “Governments need to focus on AI’s real impact, not get caught up in the hype generated by Big Tech” by David Weitzner, Associate Professor of Management, York University, Canada was published on 09/15/2024 by theconversation.com
