
Hashtag Trending Jul. 14 – Tech layoffs impact Indian outsourcers; New approach to the cost of AI processing; Right to be forgotten and AI

Hashtag Trending Podcast

Tech layoffs hit Indian outsourcers – and what does that mean for North American outsourcers and MSPs? A new approach to the “eye-watering” cost of AI processing. And the right to be forgotten – how does that apply to AI?

And yes, Elon Musk announced a new AI company he’s founding called xAI, but we aren’t covering that until there’s something more to say than that. For those who get it – this is Hashtag Trending, not Seinfeld.

These and more top tech news stories on Hashtag Trending.  

I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.

It’s no surprise that the tech industry has faced a massive wave of layoffs, with over 200,000 people already handed pink slips this year, a staggering 315 per cent increase from last year according to layoffs.fyi. In addition, tech companies are freezing hiring, deferring joining dates, and even pausing salary hikes.

While North America has been able to absorb many of these job losses thanks to a massive shortage of IT candidates, the cuts have hit the Indian IT sector hard. That sector relies on North American and European markets for about 80 per cent of its revenue, and it’s feeling the pinch. The economic uncertainty in these regions has led to reduced technology spending, slowing revenue growth for Indian IT services giants like TCS, Infosys, Wipro, and HCLTech.

While we don’t have layoff numbers specific to India, the Naukri JobSpeak Index reveals a 3 per cent decrease in white-collar hiring in India, with industries like IT, retail, BPO, and education showing cautious hiring patterns. The IT industry in particular has seen a significant 31 per cent decrease in new job opportunities compared to last year.

Amazon has postponed hiring new graduates for at least six months amid widespread job cuts. A fresh graduate from Kolkata’s Jadavpur University told Analytics India that they were supposed to start as a Software Development Engineer (SDE-1) in June, but were told at the end of May that the start date would be deferred, possibly to next year, with no compensation for the delay.

As well, IT giant Infosys has delayed salary hikes for its non-senior management staff.

What the long-term impact on the Indian economy will be, we don’t know. Nor do we know how Indian outsourcing firms will respond. Will we return to the price wars of the 1990s as Indian IT companies try to hold market share? And what would that do to North American outsourcing firms? Buckle up.

Sources include: Analytics India

Sam Altman has called the computing costs behind generative AI “eye-watering,” and we did a story yesterday about how OpenAI may be experimenting with a new way to reduce the computational costs of large language models.

OpenAI has been losing money, and we presume it’s surviving on the investment Microsoft has made in computing power and cash. Its current business model – a 20-dollar-a-month premium subscription plus fees for using the API – was supposed to generate about 200 million dollars this year. Nothing to sniff at, but unlikely to make even a dent in the cost of the computing power it needs.
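
To put that figure in perspective, here’s a quick back-of-envelope sketch in Python. The subscriber count is our own arithmetic, not an OpenAI number, and it assumes (unrealistically) that all the revenue comes from subscriptions:

```python
# Back-of-envelope math: how many $20-a-month subscribers would it
# take to generate roughly $200 million a year?
# (Hypothetical: this ignores API fees, which are part of the mix.)
monthly_fee = 20               # dollars per subscriber per month
annual_revenue = 200_000_000   # reported target, dollars per year

subscribers_needed = annual_revenue / (monthly_fee * 12)
print(f"{subscribers_needed:,.0f} subscribers")  # about 833,333
```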

In fact, some experts are speculating that without a major new development, generative AI could “hit a wall” in terms of the cost of computing. 

A story in The Next Platform describes another promising development. Researchers from the University of Washington and the University of Sydney have proposed a new architecture called the Chiplet Cloud, which could potentially revolutionize the way we run AI models. This approach could outperform Nvidia’s A100 GPU and Google’s TPUv4 accelerator when it comes to running large language models like OpenAI’s GPT-3 and Google’s PaLM 540B.

The Chiplet Cloud architecture, as described in a recent research paper, is designed to be a cost-effective solution for running AI models. It’s essentially a wafer-scale, massively parallel, SRAM-laden matrix math engine, similar to the one designed by Cerebras Systems.

In simple English?  It’s broken down into smaller, more affordable units that are then stitched back together using fast interconnects.
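
If you want a feel for what that means, here’s a toy Python sketch. It’s our own illustration of the general idea – partition the work across smaller units, then combine the partial results – and not the paper’s actual design:

```python
import numpy as np

# Toy illustration of the chiplet idea (not the researchers' design):
# split one big matrix multiply across several small "chiplets",
# each holding only a slice of the weights, then let the
# interconnect stitch the partial results back together.

def chiplet_matmul(x, weights, num_chiplets=4):
    # Give each chiplet a column slice of the weight matrix.
    shards = np.array_split(weights, num_chiplets, axis=1)
    # Each chiplet computes its slice independently...
    partials = [x @ shard for shard in shards]
    # ...and the "interconnect" concatenates the slices.
    return np.concatenate(partials, axis=1)

x = np.random.rand(1, 512)     # one input activation vector
w = np.random.rand(512, 2048)  # one big weight matrix

# Same answer as doing the multiply on one giant chip.
assert np.allclose(chiplet_matmul(x, w), x @ w)
```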

The researchers estimate that the cost of making an accelerator for large language model inference is around $35 million, which includes the cost of CAD tools, IP licensing, masks, BGA packaging, server designs, and human labor. That investment would be peanuts if the performance and cost potential of this architecture turns out to be real.

The Chiplet Cloud architecture could potentially bring a 94X reduction in cost per 1,000 tokens (a token is roughly a word, or part of one) and a huge reduction in latency compared to Nvidia’s A100 GPU. Even against Google’s TPUv4, the Chiplet Cloud shows a 15X reduction in cost per 1,000 tokens generated during inference and a 19X reduction in latency.
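
To make those multipliers concrete, here’s the arithmetic on an invented baseline. The dollar figures below are assumptions for illustration only; the 94X and 15X factors are the paper’s claims:

```python
# Purely hypothetical arithmetic: the baseline prices are made up
# for illustration; only the reduction factors come from the paper.
a100_cost_per_1k_tokens = 0.0100   # dollars (assumed, not a real price)
tpuv4_cost_per_1k_tokens = 0.0050  # dollars (assumed, not a real price)

print(f"vs A100:  ${a100_cost_per_1k_tokens / 94:.6f} per 1,000 tokens")
print(f"vs TPUv4: ${tpuv4_cost_per_1k_tokens / 15:.6f} per 1,000 tokens")
```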

This development could have significant implications for the tech industry, particularly for companies like Microsoft, Google, and Amazon, which are continually seeking ways to make AI inference less expensive. The Chiplet Cloud could be a game-changer, potentially driving down costs and improving performance in the AI space.

Now, here’s a cool fact. At least one of the researchers (I haven’t checked on the others, but at least one) is now a Microsoft employee. Granted, the research was done before he joined Microsoft. But imagine the irony if Microsoft ends up owning the architecture that finally lets Google’s AI run at a profit.

Shout out to The Next Platform. If you want to geek out and dive even further into this, there’s a link to their in-depth article in the text version of this podcast.

Sources include: The Next Platform

Here’s a great question I hadn’t thought of: how does a large language model deal with the “right to be forgotten”?

Researchers from Australia’s National Science Agency (CSIRO’s Data61) and Australian National University have raised concerns about the ability of AI models to comply with data protection laws, particularly the “right to be forgotten” or right to erasure under Europe’s General Data Protection Regulation (GDPR). 

OpenAI’s ChatGPT, Google’s Bard, Meta’s Llama, and other large language models might find compliance challenging due to the unique way they process and store data. These models learn from vast amounts of data, and once trained, it’s hard to identify specific data in the model and attribute it to particular individuals, making the right to erasure complex to implement.

The researchers highlight that removing personal data from a trained model is difficult, and that current methods like “machine unlearning” are still being explored. They also point out the problem of “hallucinated data” – facts made up by an AI bot – which can’t be reliably located in the model, let alone removed from it.

This issue is not just theoretical. In March, Italian authorities temporarily suspended access to ChatGPT due to data protection concerns, and investigations into ChatGPT data compliance are ongoing in Canada, France, and Spain.

OpenAI, the maker of ChatGPT, has outlined how it trains models and complies with privacy laws. The company acknowledges that ChatGPT may include personal information and provides an email address for handling data subject access requests. But it’s not immediately clear how OpenAI handles data removal requests or how long such requests take to implement.

In related news today, Google announced that it has satisfied European regulators and will roll out Bard to Europe and other jurisdictions. Has it solved the problem of the right to be forgotten? There’s no word on that yet – only the fact that its share price took a huge jump when it made the announcement.

If anyone has requested the right to be forgotten and gotten an answer, or has more info on this, please let me know at jlove@itwc.ca or use the comment section on our site to reach me. 

Sources include: Reuters and The Register

The European Council has adopted new regulations mandating that companies, including tech giant Apple, ensure that batteries in their products are user-replaceable. The regulation covers all battery categories: portable batteries, electric vehicle batteries, industrial batteries, and batteries used for light transport.

The regulation is designed to address the environmental impact of batteries at every stage of the life cycle. It sets ambitious targets for waste battery collection and lithium recovery, underscoring the commitment to recycling and resource recovery. By 2027, portable batteries incorporated into appliances should be removable and replaceable by the end user, giving manufacturers ample time to adapt their product designs.

The regulation is expected to meet resistance from companies like Apple, which currently offers an official Self Service Repair program that lets individuals get the parts and tools required to repair their devices, including battery replacements.

I get that this might be a pain for Apple, but our job isn’t to make its life easier. Nobody is pretending that this 3-trillion-dollar, highly profitable company is keeping our prices down with its designs. The real question should be: how does its design serve us?

Why is it that European regulators get this and nobody in North America seems to think about it?

There I go, thinking out loud again.

Sources include:  The Register and Reuters 

And those are the top tech news stories for today.

Hashtag Trending goes to air five days a week, with a special weekend interview episode called Hashtag Trending, the Weekend Edition. Hope you check it out this weekend – it’s going to be great. You can follow us on Google, Apple, Spotify, or wherever you get your podcasts.

We’re also on YouTube five days a week with a video newscast – only there, we’re called Tech News Day.

If you want to catch up on more news quickly, you can read these and more stories at TechNewsDay.com and at ITWorldCanada.com on the home page.

We love your comments. You can find me on LinkedIn, on Twitter, or on our Mastodon site as @therealjimlove@technews.social. And yes, I’ve gone to the dark side: I’m on Threads as therealjimlove.

I’ll check it about as often as I do Facebook, so be patient.

If you want to reach me quickly, you might want to go to the article at itworldcanada.com/podcasts, where you’ll find a text version with additional links and references. Click on the X if you didn’t like the stories, or the check mark if you did, and tell us what you think.

And you can pass on any comments to me. I’ll get back to you.

I’m your host, Jim Love. Have a Fabulous Friday!
