• Cyber Bites - 28th March 2025

  • 2025/03/27
  • 再生時間: 10 分
  • ポッドキャスト

Cyber Bites - 28th March 2025

  • サマリー

  • * Critical Flaw in Next.js Allows Authorization Bypass* Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Rules* Hacker Claims Oracle Cloud Data Theft, Company Refutes Breach* Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Years* Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API AccessCritical Flaw in Next.js Allows Authorization Bypasshttps://zhero-web-sec.github.io/research-and-things/nextjs-and-the-corrupt-middlewareA critical vulnerability, CVE-2025-29927, has been discovered in the Next.js web development framework, enabling attackers to bypass authorization checks. This flaw allows malicious actors to send requests that bypass essential security measures.Next.js, a popular React framework used by companies like TikTok, Netflix, and Uber, utilizes middleware components for authentication and authorization. The vulnerability stems from the framework's handling of the "x-middleware-subrequest" header, which normally prevents infinite loops in middleware processing. Attackers can manipulate this header to bypass the entire middleware execution chain.The vulnerability affects Next.js versions prior to 15.2.3, 14.2.25, 13.5.9, and 12.3.5. Users are strongly advised to upgrade to patched versions immediately. Notably, the flaw only impacts self-hosted Next.js applications using "next start" with "output: standalone." Applications hosted on Vercel and Netlify, or deployed as static exports, are not affected. As a temporary mitigation, blocking external user requests containing the "x-middleware-subrequest" header is recommended.Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Ruleshttps://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agentsResearchers Uncover Dangerous "Rules File Backdoor" Attack Targeting GitHub Copilot and CursorIn a groundbreaking discovery, cybersecurity researchers from Pillar Security have identified a critical vulnerability in popular AI coding assistants that could potentially compromise software development processes worldwide. The newly unveiled attack vector, dubbed the "Rules File Backdoor," allows malicious actors to silently inject harmful code instructions into AI-powered code editors like GitHub Copilot and Cursor.The vulnerability exploits a fundamental trust mechanism in AI coding tools: configuration files that guide code generation. These "rules files," typically used to define coding standards and project architectures, can be manipulated using sophisticated techniques including invisible Unicode characters and complex linguistic patterns.According to the research, nearly 97% of enterprise developers now use generative AI coding tools, making this attack particularly alarming. By embedding carefully crafted prompts within seemingly innocent configuration files, attackers can essentially reprogram AI assistants to generate code with hidden vulnerabilities or malicious backdoors.The attack mechanism is particularly insidious. Researchers demonstrated that attackers could:* Override security controls* Generate intentionally vulnerable code* Create pathways for data exfiltration* Establish long-term persistent threats across software projectsWhen tested, the researchers showed how an attacker could inject a malicious script into an HTML file without any visible indicators in the AI's response, making detection extremely challenging for developers and security teams.Both Cursor and GitHub have thus far maintained that the responsibility for reviewing AI-generated code lies with users, highlighting the critical need for heightened vigilance in AI-assisted development environments.Pillar Security recommends several mitigation strategies:* Conducting thorough audits of existing rule files* Implementing strict validation processes for AI configuration files* Deploying specialized detection tools* Maintaining rigorous manual code reviewsAs AI becomes increasingly integrated into software development, this research serves as a crucial warning about the expanding attack surfaces created by artificial intelligence technologies.Hacker Claims Oracle Cloud Data Theft, Company Refutes Breachhttps://www.bleepingcomputer.com/news/security/oracle-denies-data-breach-after-hacker-claims-theft-of-6-million-data-records/Threat Actor Offers Stolen Data on Hacking Forum, Seeks Ransom or Zero-Day ExploitsOracle has firmly denied allegations of a data breach after a threat actor known as rose87168 claimed to have stolen 6 million data records from the company's Cloud federated Single Sign-On (SSO) login servers.The threat actor, posting on the BreachForums hacking forum, asserts they accessed Oracle Cloud servers approximately 40 days ago and exfiltrated data from the US2 and EM2 cloud regions. The purported stolen data includes encrypted SSO passwords, Java Keystore files, key files, and enterprise manager JPS ...
    続きを読む 一部表示

あらすじ・解説

* Critical Flaw in Next.js Allows Authorization Bypass* Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Rules* Hacker Claims Oracle Cloud Data Theft, Company Refutes Breach* Chinese Hackers Infiltrate Asian Telco, Maintain Undetected Network Access for Four Years* Cloudflare Launches Aggressive Security Measure: Shutting Down HTTP Ports for API AccessCritical Flaw in Next.js Allows Authorization Bypasshttps://zhero-web-sec.github.io/research-and-things/nextjs-and-the-corrupt-middlewareA critical vulnerability, CVE-2025-29927, has been discovered in the Next.js web development framework, enabling attackers to bypass authorization checks. This flaw allows malicious actors to send requests that bypass essential security measures.Next.js, a popular React framework used by companies like TikTok, Netflix, and Uber, utilizes middleware components for authentication and authorization. The vulnerability stems from the framework's handling of the "x-middleware-subrequest" header, which normally prevents infinite loops in middleware processing. Attackers can manipulate this header to bypass the entire middleware execution chain.The vulnerability affects Next.js versions prior to 15.2.3, 14.2.25, 13.5.9, and 12.3.5. Users are strongly advised to upgrade to patched versions immediately. Notably, the flaw only impacts self-hosted Next.js applications using "next start" with "output: standalone." Applications hosted on Vercel and Netlify, or deployed as static exports, are not affected. As a temporary mitigation, blocking external user requests containing the "x-middleware-subrequest" header is recommended.Hackers Can Now Weaponize AI Coding Assistants Through Hidden Configuration Ruleshttps://www.pillar.security/blog/new-vulnerability-in-github-copilot-and-cursor-how-hackers-can-weaponize-code-agentsResearchers Uncover Dangerous "Rules File Backdoor" Attack Targeting GitHub Copilot and CursorIn a groundbreaking discovery, cybersecurity researchers from Pillar Security have identified a critical vulnerability in popular AI coding assistants that could potentially compromise software development processes worldwide. The newly unveiled attack vector, dubbed the "Rules File Backdoor," allows malicious actors to silently inject harmful code instructions into AI-powered code editors like GitHub Copilot and Cursor.The vulnerability exploits a fundamental trust mechanism in AI coding tools: configuration files that guide code generation. These "rules files," typically used to define coding standards and project architectures, can be manipulated using sophisticated techniques including invisible Unicode characters and complex linguistic patterns.According to the research, nearly 97% of enterprise developers now use generative AI coding tools, making this attack particularly alarming. By embedding carefully crafted prompts within seemingly innocent configuration files, attackers can essentially reprogram AI assistants to generate code with hidden vulnerabilities or malicious backdoors.The attack mechanism is particularly insidious. Researchers demonstrated that attackers could:* Override security controls* Generate intentionally vulnerable code* Create pathways for data exfiltration* Establish long-term persistent threats across software projectsWhen tested, the researchers showed how an attacker could inject a malicious script into an HTML file without any visible indicators in the AI's response, making detection extremely challenging for developers and security teams.Both Cursor and GitHub have thus far maintained that the responsibility for reviewing AI-generated code lies with users, highlighting the critical need for heightened vigilance in AI-assisted development environments.Pillar Security recommends several mitigation strategies:* Conducting thorough audits of existing rule files* Implementing strict validation processes for AI configuration files* Deploying specialized detection tools* Maintaining rigorous manual code reviewsAs AI becomes increasingly integrated into software development, this research serves as a crucial warning about the expanding attack surfaces created by artificial intelligence technologies.Hacker Claims Oracle Cloud Data Theft, Company Refutes Breachhttps://www.bleepingcomputer.com/news/security/oracle-denies-data-breach-after-hacker-claims-theft-of-6-million-data-records/Threat Actor Offers Stolen Data on Hacking Forum, Seeks Ransom or Zero-Day ExploitsOracle has firmly denied allegations of a data breach after a threat actor known as rose87168 claimed to have stolen 6 million data records from the company's Cloud federated Single Sign-On (SSO) login servers.The threat actor, posting on the BreachForums hacking forum, asserts they accessed Oracle Cloud servers approximately 40 days ago and exfiltrated data from the US2 and EM2 cloud regions. The purported stolen data includes encrypted SSO passwords, Java Keystore files, key files, and enterprise manager JPS ...

Cyber Bites - 28th March 2025に寄せられたリスナーの声

カスタマーレビュー:以下のタブを選択することで、他のサイトのレビューをご覧になれます。