ChatGPT Policy Making + Astrological Forecasting
Beth, the curious researcher, was playing around with ChatGPT one morning last week, kicking the tires on possibilities. I jokingly asked her if ChatGPT could write me a policy on ChatGPT and AI, since that was on my to-do list for the week. ChatGPT did not write me a policy that I would ever actually use. But it did give me a draft policy that raised enough red flags to show why we need to be cautious about the strengths and weaknesses of AI when we use it to make work decisions and write work content.
This was the response: (aka The Worst Workplace Policy Ever)
“Policy for Employees to Use ChatGPT at Work that Follows Privacy Best Practices:
1 Authorized Use:
Employees are authorized to use ChatGPT only for work-related purposes. Personal use is prohibited.
2 Privacy:
Employees must respect the privacy of others and must not use ChatGPT to access, view, or share any confidential, sensitive, or personal data belonging to colleagues, customers, or partners.
3 Data Protection:
Employees must adhere to GDPR regulations and must ensure that personal data is collected and processed only for legitimate and authorized purposes. They must not collect or process personal data that is not necessary for their work responsibilities.
4 Security:
Employees must protect ChatGPT login credentials, keep them confidential, and use secure passwords. They must also ensure that any devices used to access ChatGPT are secure and free from malware or viruses.
5 Record Keeping:
Employees must maintain appropriate records of their ChatGPT interactions, including any personal data processed, the purpose of processing, and any associated consent obtained.
6 Third-Party Use:
Employees must obtain the appropriate approvals before sharing or allowing third parties to access any ChatGPT conversations or personal data.
7 Reporting:
Employees must report any suspected or actual data breaches, unauthorized access, or any other privacy incidents involving ChatGPT to the company's data protection officer or the appropriate authorities, as required by GDPR regulations.
8 Training:
Employees must receive adequate training on GDPR regulations, ChatGPT usage, and privacy best practices. They must also attend regular refresher training to ensure that they stay up to date with any changes in the policy or GDPR regulations.
By following these best practices, employees can use ChatGPT in a privacy-conscious manner that respects the rights and privacy of others, while also complying with GDPR regulations.”
Let's break this down, line by line:
“1 Authorized Use:
Employees are authorized to use ChatGPT only for work-related purposes. Personal use is prohibited.”
I found it interesting that personal use was prohibited, when having it work the other way around would make more sense. Reading ChatGPT's usage guidelines, the tool is in research mode and every conversation can be read, meaning if someone puts confidential information into ChatGPT, they are breaking company policy, and likely privacy laws. (aka worse than Cambridge Analytica.)
“2 Privacy:
Employees must respect the privacy of others and must not use ChatGPT to access, view, or share any confidential, sensitive, or personal data belonging to colleagues, customers, or partners.”
This is a very ambiguous statement: it glosses over the access that ChatGPT itself has, and it puts the burden of understanding data classification on users. That is a problem, because not all users understand how an aggregation of individually harmless data can become protected data.
We found blog posts of several people using ChatGPT on internal work information, which is not okay. We don’t know how that information is being used or stored at all. This is Cambridge Analytica scary.
What happens if protected data is entered in ChatGPT? Is there a process to get that data removed?
What happens if I accidentally type in personally identifiable information? What does ChatGPT do with my PII?
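The policy never answers these questions, so the burden falls entirely on the person typing. One practical mitigation, purely as an illustration, is to scrub obvious PII locally before a prompt ever leaves your machine. Below is a minimal, hypothetical Python sketch; the regex patterns are assumptions made for the example and catch only the most obvious formats, which is nothing like real data classification.

```python
import re

# Hypothetical pre-send filter: redact obvious PII before text is pasted
# into ChatGPT or any external tool. Regex matching only catches the most
# obvious patterns; this is a sketch, not a substitute for a real DLP tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything that looks like PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Follow up with Jane at jane.doe@example.com or 555-867-5309."
    print(redact_pii(prompt))
    # Follow up with Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```

Even a crude filter like this answers the "what if I accidentally paste PII" question in a way the generated policy does not: the safest place to catch it is before it ever reaches the tool.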
“3 Data Protection:
Employees must adhere to GDPR regulations and must ensure that personal data is collected and processed only for legitimate and authorized purposes. They must not collect or process personal data that is not necessary for their work responsibilities.”
Again, this is a blanket GDPR best practice, not one that relates specifically to ChatGPT. ChatGPT also doesn't seem to have awareness here that using it as a work tool might not be a good idea because of its own usage policies. A better policy would be to allow ChatGPT only for externally available data, like the material already published on the company's websites.
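To make that "externally available data only" idea concrete, here is a hypothetical sketch of the kind of crude gate a company could put in front of the tool. The marker strings and the internal domain are invented for illustration; a real deployment would key off the company's own classification labels.

```python
# Hypothetical gate for an "externally available data only" rule.
# The marker strings and the internal domain are invented for illustration;
# a real company would key off its own classification labels.
INTERNAL_MARKERS = [
    "confidential",
    "internal use only",
    "do not distribute",
    "@internal.example.com",  # assumed internal email domain
]

def allowed_for_chatgpt(prompt: str) -> bool:
    """Return False if the prompt contains any marker of internal data."""
    lowered = prompt.lower()
    return not any(marker in lowered for marker in INTERNAL_MARKERS)

# Quick check of the gate:
assert allowed_for_chatgpt("Summarize the About page on our public website.")
assert not allowed_for_chatgpt("Rewrite this CONFIDENTIAL roadmap for the board.")
```

A keyword check will obviously miss plenty, but it at least encodes a rule employees can understand: if the material isn't already public, it doesn't go in.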
“4 Security:
Employees must protect ChatGPT login credentials, keep them confidential, and use secure passwords. They must also ensure that any devices used to access ChatGPT are secure and free from malware or viruses.”
This is an interesting statement, as ChatGPT is only attempting to keep its own information secure and protected. It doesn't even mention basic security best practices like multi-factor authentication (MFA).
“5 Record Keeping:
Employees must maintain appropriate records of their ChatGPT interactions, including any personal data processed, the purpose of processing, and any associated consent obtained.”
My first thought reading this was that ChatGPT wants to ensure you have all the correct information for your lawyer. There are so many lawsuit scenarios here.
“6 Third-Party Use:
Employees must obtain the appropriate approvals before sharing or allowing third parties to access any ChatGPT conversations or personal data.”
Does ChatGPT not understand that it is a third party in this situation? This may be its only really sound policy recommendation: don't use ChatGPT without approvals. No one knows what is happening with your data.
“7 Reporting:
Employees must report any suspected or actual data breaches, unauthorized access, or any other privacy incidents involving ChatGPT to the company's data protection officer or the appropriate authorities, as required by GDPR regulations.”
I do not think the average employee would know how to identify a suspected or actual breach. The idea is good; the execution is poor. Still, this is probably one of the only statements that is actually useful to employees, as it does state that privacy incidents need to be reported. With no "you can't upload this" policy for ChatGPT, though, they really are putting a lot of risk, all of the risk, onto the user. ChatGPT is not just a search engine, but it would be really easy for people to think the rules for a simple Google search and for using ChatGPT were the same. They are very, very different scenarios.
“8 Training:
Employees must receive adequate training on GDPR regulations, ChatGPT usage, and privacy best practices. They must also attend regular refresher training to ensure that they stay up to date with any changes in the policy or GDPR regulations.
By following these best practices, employees can use ChatGPT in a privacy-conscious manner that respects the rights and privacy of others, while also complying with GDPR regulations.”
I think the only thing that is really being protected here is ChatGPT, not the users. Users cannot be responsible for ChatGPT; they are responsible for their own actions when using AI or any other application. If I gave this policy to my company, there would be tons of emails and DMs from people asking questions, trying to understand what they can and cannot do.
The scary thing to me about ChatGPT is that it is nearly impossible to get it to cite sources. It's gathering and collating information, but not giving a traceable route back to where that information came from, or how. In research, we sometimes talk about "dirty data sets": messy bits of data culled from multiple sources that can give you an idea about something, but that we wouldn't use to make critical decisions, because we can't really trace the sources or go back and retrace our steps, add new data, and update assumptions that may have been incorrect.
I do like ChatGPT for getting a general idea about something from one particular source. Beth asked ChatGPT to make a list of 50 possible types of Target customers. That list was great because it helped Beth brainstorm Target customer ideas. Where it started to break down was when Beth asked ChatGPT about Target's actual top customer groups. It couldn't verify the data sources behind its prioritization, and Beth couldn't check its work.
In a lot of ways, this feels like consulting an astrologer. There are patterns. They might be real, but I have no way of verifying them. As it stands, I would want a policy saying we don't make business decisions off information pulled from ChatGPT, unless it happens to come with a credible, traceable source list.
I also would never in a million years put any sensitive company information into ChatGPT, because we don't know what they are doing with our data. The terms and conditions state they can use it for research purposes. What does that mean? Are they using it to train the neural net? Is it encrypted? How is it stored? We just don't know.
Also, ChatGPT is a data time machine. Its training data reportedly hasn't been updated since September 2021, so there's a lot of information it just doesn't have. I would want that flagged and labeled. How many people have been getting advice and making business decisions in the past few weeks off old data? And, in many cases, data of unknown source, quality, or freshness?
All of ChatGPT's marketing material tells us that it is here to make safe, open AI. But is it? We really don't know.