French regulators claim the iPhone 12 exceeds radiation limits. A judge in the Google anti-trust case appears to think that Firefox is a search engine. And a powerful senator is going after Elon Musk after it was revealed he might have helped the Russians avoid a devastating defeat.
These and more top tech stories on Hashtag Trending
I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.
The French regulator ANFR has called for the removal of the iPhone 12 from the French market, citing its findings that the device emits radiation beyond the EU’s permissible limit.
In response, Apple stated that multiple international bodies have certified the iPhone 12, and it adheres to radiation standards and regulations globally.
Apple has provided the agency with lab results from both the company and third-party labs, confirming the iPhone 12’s compliance with regulations.
ANFR has emphasized that Apple must take corrective measures for phones already in use or consider recalling the equipment. The specific absorption rate (SAR) measures the rate of radiofrequency energy absorbed by the body, and ANFR’s tests revealed that the iPhone 12 exceeded the set limits in one of the two SAR tests conducted.
Source included: Axios
The Department of Justice revealed that Google is reportedly spending $10 billion a year to maintain its status as the world’s leading online search engine. This comes as the DOJ pursues a new antitrust suit against Google.
The report highlighted that Google has entered into agreements with major companies like Apple, Samsung, and Mozilla. These agreements ensure that Google remains the default search engine on their respective smartphones and web browsers. The DOJ’s lawyers have labeled these agreements “powerful strategic weapons” that effectively block competitors from challenging Google’s dominance in the search market. Google, on the other hand, refutes these claims, emphasizing that its search engine is superior and that users have the freedom to switch to other search engines if they wish.
This antitrust trial, being touted as the most significant in two decades, could reshape the tech landscape and influence how big tech companies operate in the future.
But the complexities of this trial may prove difficult for the justice system. As Ars Technica reported today, “wading through these arguments will require a decent knowledge of tech history,” something the judge in the case seemed to struggle with today. Not only did he appear not to know how search engines or online advertising work, he seemed unaware of the difference between a browser and a search engine.
Not exactly comforting to either the prosecution or the defendant when the stakes are this high.
Eight major tech companies have promised to rigorously test their AI applications for security before launching them, according to an announcement from the White House. The CEOs of these tech giants committed to developing machine-learning software in a manner that is safe, secure, and trustworthy.
This commitment encompasses both present and future generative AI models.
Each company has said they will undergo both internal and external audits, allowing independent experts to assess potential misuse of their models.
This is in response to public and government concern about the potential misuse of AI, for example, to generate information that could aid in creating biochemical weapons or to exploit cybersecurity vulnerabilities.
To mitigate these risks, the companies have agreed to protect their intellectual property, ensure the confidentiality of their neural network weights, and provide users with a mechanism to report vulnerabilities or bugs. Additionally, they will publicly disclose their technology’s capabilities and limitations, including potential biases.
The White House views these commitments as a precursor to more formal regulation, emphasizing that this is just the beginning of a comprehensive approach to harnessing AI’s potential while managing its risks.
Source included: The Register
And for companies that are not tech giants but are struggling to develop new policies for AI, a company named Contrast Security has launched an open-source project that establishes a clear and actionable policy for managing the privacy and security risks associated with generative AI and large language models (LLMs) in organizations. The policy addresses:
- Ownership and intellectual property (IP) rights of AI-generated software.
- Safeguarding against the creation or use of AI-generated code containing malicious elements.
- Preventing employees from leveraging public AI systems to learn from proprietary data, whether it belongs to the organization or third parties.
- Blocking unauthorized or underprivileged individuals from accessing sensitive or confidential data.
This policy serves as a foundational guide for CISOs, security experts, compliance teams, and risk professionals. David Lindner, Chief Information Security Officer at Contrast Security, emphasized the importance of a clear AI policy, stating, “As AI continues to evolve, we need to ensure that its potential is harnessed in a responsible and ethical manner.”
I downloaded the policy and gave it a quick once-over before going to air, and it is certainly a good start. There’s a link in the text version of the podcast at itworldcanada.com/podcasts
The link: GitHub page
Source included: SD Times
And in this installment of the X files, Elon Musk is having another disappointing week. It was revealed that his Starlink service, which was projected to have 20 million subscribers by the end of 2022, had reached only 1 million by that date. The good news: subscribers are growing, and the company claims it now has a million and a half subscribers and is profitable.
Now the bad news: an influential US senator, Elizabeth Warren, is calling for a Congressional investigation into Elon Musk’s decision to deny a request to activate SpaceX’s Starlink satellite communications network over a portion of the Crimean coast. This decision reportedly hindered a Ukrainian drone attack on Russian military ships in the Black Sea.
Musk’s refusal was first revealed by CNN, citing excerpts from an upcoming biography of Musk by Walter Isaacson. Musk defended his decision, stating that approving the request would have made “SpaceX explicitly complicit in a major act of war and conflict escalation.” Critics argue that Musk’s decision assisted Moscow during its unprovoked invasion of Ukraine.
Warren emphasized the need to scrutinize the contracts between SpaceX and the Department of Defense, questioning the power Musk holds in making such decisions. Since the incident, Ukraine has launched multiple attacks on Russian naval targets in the Black Sea.
Source included: Axios
Those are the top tech news stories for today. For more fast reads on top stories, check us out at TechNewsDay.com or on the homepage of ITWorldCanada.com.
Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”
You can get us anywhere you get audio podcasts and there is a copy of the show notes at itworldcanada.com/podcasts
I’m your host, Jim Love. Have a Thrilling Thursday!