REUTERS
The European Commission has formally proposed postponing full implementation of its regulation of "high-risk" AI applications (such as biometric identification, HR screening, and credit scoring) from August 2026 to December 2027.

Strategic Insight
This allows firms more time to adjust, but also raises questions about how effectively high-risk systems will be regulated during the transition.
THE VERGE
Alongside the delay, the Commission's "Digital Omnibus" package would ease other rules: allowing anonymised personal data to be used for AI training with fewer hurdles, simplifying cookie consent, and consolidating AI oversight under a new EU AI Office. Critics say this amounts to a "massive rollback" of citizen protections.

Strategic Insight
The regulatory shift creates a more permissive environment for AI development in Europe, but at a potential cost to privacy and consumer protection standards.
REUTERS
Amid the regulatory changes, privacy advocates and civil-rights organisations are sounding alarms: as many as 127 groups say the proposed changes could undermine fundamental rights and tilt the playing field further in favour of Big Tech.

Strategic Insight
The backlash from civil society highlights the tension between economic competitiveness and fundamental-rights protection in AI governance.
BLOOMBERG
Donald Trump has urged Congress to enact a single federal AI-oversight standard, warning that a patchwork of 50 state-level AI rules could hamper U.S. competitiveness against China.

Strategic Insight
The push for federal standardisation signals recognition that fragmented AI regulation could weaken national competitiveness in the global AI race.
IT BRIEF NZ
A report warns that by 2026 organisations will face more sophisticated AI-powered attacks (phishing, deepfakes, model poisoning) and that boardrooms must treat AI risk as a core governance issue.

Strategic Insight
AI risk is shifting from a technical concern to a board-level governance imperative as threat sophistication accelerates.