Hackers Add ChatGPT to Their Arsenal in Cyberattacks

2024-05-24 · Source: Sohu Fashion (搜狐时尚)

AsianFin--Microsoft and OpenAI have revealed that hackers are already using large language models like ChatGPT to launch cyberattacks more quickly and effectively. In a newly published research report, Microsoft and OpenAI describe detected attempts by Russian, North Korean, and Iranian-backed groups to use tools like ChatGPT to research targets, improve scripts, and help build social engineering techniques.

“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in the report.

The Strontium group, linked to Russian military intelligence, has been found to be using large language models (LLMs) “to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.” The hacking group, also known as APT28 or Fancy Bear, has been active during the Russia-Ukraine war and was previously involved in targeting Hillary Clinton’s presidential campaign in 2016.

The group has also been using LLMs to help with “basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations,” according to Microsoft.
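To make the quoted list concrete, the sketch below shows the kind of mundane automation those terms describe: walking a set of files, selecting data with a regular expression, and spreading the work across processes. This is an illustrative example written for this article, not code from the report; the file names and the IPv4 pattern are assumptions.

```python
import re
from multiprocessing import Pool
from pathlib import Path

# Illustration of "file manipulation, data selection, regular expressions,
# and multiprocessing": scan text files for IPv4-looking tokens in parallel.
IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def extract_ips(path):
    """Return every IPv4-looking token found in one file."""
    text = Path(path).read_text(errors="ignore")
    return IP_PATTERN.findall(text)

def scan_files(paths, workers=4):
    """Run extract_ips over many files in parallel, deduplicate, and sort."""
    with Pool(workers) as pool:
        results = pool.map(extract_ips, paths)
    return sorted({ip for hits in results for ip in hits})
```

Each of these steps is trivial on its own; the report's point is that an LLM can stitch such routine scripts together in seconds, lowering the effort bar for attackers as much as for anyone else.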

New technologies inevitably bring new security issues and demands. However, the challenge lies in the fact that attackers often “discover security risks and launch attacks before us, requiring us to respond and adapt more quickly,” said Chen Fen, Senior Vice President of AsiaInfo Security.

As of today, the application scenarios for large models remain limited. Industry professionals cite factors such as the immaturity of the technology, an unfavorable cost-benefit ratio, and the lack of suitable application scenarios. Hackers, however, need only one reason: large models are a new weapon for breaking through high-value defenses.

The nature of attacks determines the nature of defenses, making large models a must-have for cybersecurity companies. Global security firms are acting swiftly. In May, Microsoft officially launched Security Copilot. Last year, Google introduced its dedicated cybersecurity large model, and cybersecurity giants Palo Alto Networks and CrowdStrike have integrated large model capabilities into their security operations platforms.

In China, over 80% of cybersecurity companies are incorporating large model technology into their products, with 30% already researching large model security. This has led to a wave of security startups.

Following the release of ChatGPT in late 2022, artificial general intelligence (AGI) technology represented by large models has fueled a frenzy among global hackers. AGI has upgraded cyberattacks and cybercrime. Previously, it took hackers months to develop a single piece of attack malware; now, with AGI tools, it can be done in minutes, greatly improving the efficiency and scope of attacks.

Large models have a strong understanding of programming languages, allowing attackers to quickly identify software vulnerabilities. Additionally, some opportunistic hackers are using AI algorithms for deepfake videos, leading to a new wave of online scams.

AsiaInfo Security has also discovered that attackers are targeting AI computing infrastructure and GPU clusters, which are of high value. Within a year, dozens of different attack methods targeting large models have emerged.

Earlier this year, a computing cluster of thousands of servers in the United States was compromised and used for Bitcoin mining. Once hackers see profit, they quickly target these high-value assets. Even large models themselves could potentially be exploited.

In one attack replicated by AsiaInfo Security, a specially crafted attack sample was submitted to a large model. The sample was not a normal prompt but a complex language structure. When a normal request was submitted afterward, it took the model over 60 seconds to respond, compared to the original time of under three seconds. This shows that if core applications are driven by AI, such attacks could lead to a significant increase in computational consumption, causing denial-of-service attacks and crippling core business operations.
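One defensive response to the resource-exhaustion pattern described above is simply to watch per-request latency and flag requests that blow past the service's normal baseline. The sketch below is an illustrative example under stated assumptions (a 3-second baseline matching the figures in the article); it is not AsiaInfo Security's actual mechanism, and `LatencyGuard` and its parameters are hypothetical names.

```python
import time

class LatencyGuard:
    """Flag model requests whose processing time balloons past a baseline."""

    def __init__(self, baseline_s=3.0, ratio=20.0):
        self.baseline_s = baseline_s  # normal response time, per the article
        self.ratio = ratio            # alert when latency > baseline * ratio

    def timed_call(self, handler, prompt):
        """Invoke the model handler, returning (reply, elapsed, suspicious)."""
        start = time.perf_counter()
        reply = handler(prompt)
        elapsed = time.perf_counter() - start
        suspicious = elapsed > self.baseline_s * self.ratio
        return reply, elapsed, suspicious
```

With the article's numbers, a crafted sample that drags a sub-3-second model out past 60 seconds would trip this check, letting operators throttle or block the offending client before the cluster's compute is consumed.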

Chen emphasized that "cybersecurity battles have evolved from human-to-human confrontations to AI-to-AI confrontations. Only AI-driven cybersecurity detection and protection technologies can identify AI-driven hacker attacks."

However, the concept of "security large models" has also been met with skepticism, with some questioning whether it is merely a buzzword. There are precedents for using AI in security, such as spam detection and automated vulnerability repair using machine learning algorithms.

AsiaInfo Security has taken a cautious approach, focusing on whether large models can become an inherent capability of their products. Instead of immediately integrating large models across their product lines, they spent more time building a foundational framework, MaaS (Model-as-a-Service), to efficiently deploy large model capabilities across their security applications.

According to Zhang Yaqin, an academician at the Chinese Academy of Engineering, the advent of large models represents the construction of a new ecosystem. Large models will become the new operating systems, and the scale of the AI era's ecosystem will be at least an order of magnitude higher than that of the mobile internet era.

AsiaInfo Security's XPLAN includes two parts: Security For AI, focusing on protecting AI computing infrastructure and large models, and AI For Security, focusing on developing vertical large models for the cybersecurity industry and creating intelligent security applications.

Collaborative achievements in the security ecosystem are emerging. The Cyber Extortion Response and Governance Center, initiated by AsiaInfo Security and other partners, aims to establish integrated cyber extortion response and governance operations. Additionally, the East-West Data and Computing Security Innovation Center has made breakthroughs in model security, privacy computing, and full-stack cloud security.

AsiaInfo Security has also partnered with the HarmonyOS ecosystem to enhance terminal security capabilities, ensuring a safer and more reliable environment for users.

The joint efforts in technology, policy, regulations, and societal consensus are expected to shape the long-term process of building a secure AI ecosystem.
