
Google Allegedly Advised Employees to Refrain from Sharing Confidential Information on AI Chatbot

Alphabet, the parent company of Google, is taking precautions regarding the use of chatbots, including its own program, Bard. Even as Alphabet promotes the chatbot globally, it has advised employees against entering confidential information into AI chatbots, a policy intended to protect sensitive data. Chatbots such as Bard and ChatGPT use generative artificial intelligence to hold conversations and answer a wide range of prompts. However, human reviewers may read these conversations, and researchers have found that such AI systems can reproduce data they absorbed, posing a risk of leakage.

In addition to the confidentiality concerns, Alphabet has warned its engineers against directly using computer code generated by chatbots. The company acknowledged that Bard can make undesired code suggestions but said it still helps programmers, adding that it aims to be transparent about the limitations of its technology.

These precautions reflect Google's desire to avoid harm to its business from its chatbot software, especially as it competes with ChatGPT, developed by OpenAI and backed by Microsoft. The race among tech giants involves substantial investments and potential revenue from advertising and cloud services linked to new AI programs.

Furthermore, Google’s caution aligns with a security standard increasingly adopted by corporations: advising employees against using publicly available chat programs. Other major companies, including Samsung, Amazon.com, Deutsche Bank, and reportedly Apple, have also set guidelines for AI chatbots.

According to a survey conducted by Fishbowl, approximately 43% of professionals, including employees from top US-based companies, were using AI tools like ChatGPT without informing their superiors.

Google instructed its staff not to provide internal information to Bard during testing prior to its launch in February. Now, Bard is being introduced in more than 180 countries and 40 languages as a tool for fostering creativity. However, Google’s warnings extend to the code suggestions generated by Bard.

Regarding privacy concerns, Google has engaged in discussions with Ireland’s Data Protection Commission and is addressing regulators’ inquiries. Politico recently reported that Google postponed Bard’s launch in the European Union to gather more information about the chatbot’s impact on privacy.

Chatbot technology can speed up tasks such as drafting emails, documents, and even software, but the resulting content may contain misinformation, sensitive data, or copyrighted material. Google’s updated privacy notice advises users not to include confidential or sensitive information in their conversations with Bard.

To address these concerns, some companies have developed software that tags certain data and restricts it from flowing outside the organization. Cloudflare is one company offering such capabilities.
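As a rough illustration of how this kind of control can work, the sketch below uses simple pattern matching to flag and redact sensitive strings in a prompt before it would be sent to an external chatbot. This is a minimal Python example; the patterns, function names, and redaction behavior are assumptions chosen for demonstration and do not represent Cloudflare's product or any vendor's actual API.

```python
import re

# Illustrative patterns only; a real data-loss-prevention tool would use
# far more sophisticated classifiers and organization-specific rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_tag": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def redact_before_sending(prompt: str) -> tuple[str, list[str]]:
    """Replace matches of known sensitive patterns and report which ones fired."""
    flagged = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            flagged.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, flagged

if __name__ == "__main__":
    text = "Summarize this CONFIDENTIAL memo and email alice@example.com"
    safe_text, hits = redact_before_sending(text)
    print(safe_text)  # sensitive spans replaced before the prompt leaves the company
    print(hits)       # ['email', 'internal_tag']
```

A gateway running a filter like this could block or rewrite prompts, which is one way employers try to let staff use public chatbots without exposing internal data.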

Both Google and Microsoft offer conversational tools to business customers, but these tools come at a higher price and do not integrate data into public AI models. By default, Bard and ChatGPT save users’ conversation history, although users have the option to delete it.

Microsoft’s consumer chief marketing officer, Yusuf Mehdi, commented on the conservative stance taken by companies, indicating that it is sensible for organizations to discourage their staff from using public chatbots for work-related purposes.

While Microsoft did not disclose whether there is a blanket ban on entering confidential information into public AI programs, another executive at the company stated that he personally limits his use of such programs.

Matthew Prince, CEO of Cloudflare, compared entering confidential information into chatbots to giving a group of PhD students access to private records.

In conclusion, Alphabet’s cautionary approach to chatbot usage aligns with industry standards and aims to protect sensitive information. As businesses worldwide implement guidelines for AI chatbots, companies like Google and Microsoft are offering conversational tools while prioritizing privacy and security. These measures are vital in a competitive environment that involves substantial investments and potential revenue from new AI programs.
