Palo Alto Networks Stops AI App Data Leakage in Its Tracks

May 03, 2023
4 minutes

ChatGPT is the fastest-growing consumer application in history, with 100 million monthly active users just two months after launch. While these AI apps can significantly boost productivity and creative output, they also pose a serious data security risk to the modern enterprise.

Many organizations may be surprised to learn that their employees are already using AI-based tools to streamline their daily workflows, potentially putting sensitive company data at risk. Software developers can upload proprietary code to help find and fix bugs, while corporate communications teams can ask for help in crafting sensitive press releases.

To safeguard against the growing risk of sensitive data leakage to AI apps and APIs, we are excited to announce a new set of capabilities to secure ChatGPT and other AI apps as part of our Next-Generation CASB solution. These capabilities include:

  • Comprehensive app usage visibility for complete monitoring of all SaaS usage activity, including employee use of new and emerging generative AI apps that can put data at risk.
  • Granular SaaS application controls that safely enable employee access to business-critical applications, while limiting or blocking access to high-risk apps, including generative AI apps that have no legitimate business purpose.
  • Advanced data security that provides ML-based data classification and data loss prevention to detect and stop company secrets, personally identifiable information (PII), and other sensitive data from being leaked to generative AI apps by well-intentioned employees.

How Palo Alto Networks Safeguards Against ChatGPT Data Leakage

If you are an existing Palo Alto Networks Prisma Access or NGFW customer, you can purchase a license for our NG-CASB bundle, and the new ChatGPT capabilities activate immediately. Palo Alto Networks Next-Generation CASB now helps customers secure against the risk of data leaks to ChatGPT by providing visibility, controlling usage, and preventing sensitive data loss.

Comprehensive App Usage Visibility

Today, any NGFW, VM-Series, Prisma Access, or Prisma SD-WAN customer has complete visibility into the traffic on their network. Traffic to ChatGPT is captured through the App-ID openai-chatgpt, which can be discovered via a simple search on the firewall. Our Next-Gen CASB customers can now also leverage additional application-level security and privacy insights into ChatGPT.
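The same App-ID search can also be scripted. The sketch below is a minimal example using the open-source pan-os-python SDK (not mentioned in this article, and just one possible approach); the management address and credentials are placeholders, and the only value taken from the article is the openai-chatgpt App-ID name.

```python
from panos.firewall import Firewall

# Placeholder management address and credentials (assumptions for this sketch).
fw = Firewall("192.0.2.10", api_username="admin", api_password="changeme")

# Pull recent traffic-log entries that matched the openai-chatgpt App-ID.
# "(app eq openai-chatgpt)" is the same filter string you would type into the
# web UI log viewer.
fw.xapi.log(log_type="traffic", nlogs=50, filter="(app eq openai-chatgpt)")
print(fw.xapi.xml_result())
```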

Granular SaaS Application Controls

Access to ChatGPT can be granularly controlled by user or user group, and by application function. For example, access to ChatGPT can be limited to a subset of privileged users, or the app’s messaging features and API can be specifically blocked.

Control can be accomplished via firewall policy using App-ID control, or even via a new “artificial intelligence” SaaS category within Next-Gen CASB, Prisma Access, or NGFW.
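As a rough illustration of that policy model, the following sketch uses the pan-os-python SDK to allow the openai-chatgpt App-ID only for a vetted user group and block it for everyone else. The firewall address, zone names, rule names, and directory group are hypothetical placeholders, not values from the article, and the same rules could just as well be built in the web UI or Panorama.

```python
from panos.firewall import Firewall
from panos.policies import Rulebase, SecurityRule

# Hypothetical firewall and environment details; adjust to your deployment.
fw = Firewall("192.0.2.10", api_username="admin", api_password="changeme")
rulebase = fw.add(Rulebase())

# Allow ChatGPT only for an approved user group, matched by App-ID.
allow_rule = SecurityRule(
    name="allow-chatgpt-privileged-users",
    fromzone=["trust"],
    tozone=["untrust"],
    source=["any"],
    source_user=["example\\chatgpt-approved-users"],  # hypothetical directory group
    destination=["any"],
    application=["openai-chatgpt"],
    service=["application-default"],
    action="allow",
)

# Block the App-ID for all other users.
deny_rule = SecurityRule(
    name="block-chatgpt-everyone-else",
    fromzone=["trust"],
    tozone=["untrust"],
    source=["any"],
    destination=["any"],
    application=["openai-chatgpt"],
    service=["application-default"],
    action="deny",
)

rulebase.add(allow_rule)
rulebase.add(deny_rule)
allow_rule.create()
deny_rule.create()
fw.commit(sync=True)
```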

Advanced Data Security

Our Enterprise Data Loss Prevention (DLP) solution performs inline, ML-powered data classification in real time to detect and block sensitive data from leaving your network. This empowers administrators to create inline data loss prevention policies that can identify and stop sensitive data loss to generative AI apps such as ChatGPT without having to completely block access to the application.
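One hedged way to express that "allow, but inspect" pattern in policy is sketched below with the pan-os-python SDK: the rule permits the openai-chatgpt App-ID but attaches a security profile group that is assumed to already contain your Enterprise DLP (data filtering) profile. The firewall address, rule name, and profile-group name are placeholders introduced for illustration only.

```python
from panos.firewall import Firewall
from panos.policies import Rulebase, SecurityRule

# Hypothetical firewall; the profile group below is assumed to already
# reference an Enterprise DLP / data filtering profile.
fw = Firewall("192.0.2.10", api_username="admin", api_password="changeme")
rulebase = fw.add(Rulebase())

# Allow ChatGPT, but send the traffic through the DLP-enabled profile group so
# uploads containing secrets or PII are stopped inline instead of blocking the app.
rule = SecurityRule(
    name="allow-chatgpt-with-dlp",
    fromzone=["trust"],
    tozone=["untrust"],
    source=["any"],
    destination=["any"],
    application=["openai-chatgpt"],
    service=["application-default"],
    action="allow",
    group="corp-dlp-profile-group",  # hypothetical security profile group
)
rulebase.add(rule)
rule.create()
fw.commit(sync=True)
```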

Because NG-CASB with Enterprise DLP is delivered across our entire network security platform, including both NGFW and Prisma Access, customers can apply a single, consistent data security policy that covers users in the office as well as employees working from home or on the road.

Securing your organization from sensitive data loss to ChatGPT is as easy as adding the NG-CASB bundle to your Palo Alto Networks NGFW or Prisma Access deployment, or simply adding inline DLP, and creating the policies described above.

As new generative AI apps continue to emerge, we will keep expanding our app catalog to provide comprehensive data protection for our customers. For more information on how to help stop data leakage that can arise from the use of AI-based apps like ChatGPT, contact Palo Alto Networks to get started today.
