Data Privacy and Cybersecurity
This section provides a high-level overview of key considerations, including critical data definitions (e.g., anonymized versus de-identified data), emerging threats such as deep fakes, and practical strategies for updating privacy policies to account for AI. For a more in-depth exploration of these topics and actionable resources, we encourage readers to review the materials provided by this subgroup in the Appendix.
Learn what Task Force Leaders say about Data Privacy and Cybersecurity
🌴 Florida Spotlight
Several districts in Florida are part of the A4L National Student Data Privacy Consortium and use a standard Data Privacy Agreement. This could be a great resource for your district as well! Contact the Task Force to learn more.
In addition, several districts in Florida participate in the 1EdTech Vetting Workstream, which streamlines the approval process for edtech tools to ensure they meet data privacy, security, and interoperability standards. Through the TrustEd Apps Program, 1EdTech vets hundreds of edtech companies using an open standard privacy rubric, helping institutions and suppliers save time and effort in evaluating data privacy compliance.
Developed by the UF Lastinger Center for Learning in partnership with the AIMS Collaboratory, the Guidance and Agreement for Data Harmony, Responsibility, Retention, and Sharing (GADRRS) universal data sharing agreement is a free resource intended to serve as a standard agreement for schools, districts, researchers, and solution providers.
Key Terminology
Term | Definition | Example |
---|---|---|
Data Privacy | The practice of ensuring that personal information is collected, stored, and shared in a manner that protects individuals’ identities and prevents misuse. | Ensuring student data is only accessible to authorized personnel and not used for unauthorized purposes. |
Data Security | Measures and protocols designed to protect digital data from unauthorized access, corruption, or theft. | Using encryption and secure networks to safeguard student records from cyberattacks. |
Personally Identifiable Information (PII) | Information that can directly or indirectly identify a specific individual, such as names, addresses, social security numbers, or student ID numbers. | Student attendance records containing names and school IDs. |
Anonymized Data | Data that has been stripped of all personally identifiable information (PII), making it impossible to trace back to an individual. | Aggregating student test scores without linking them to names or IDs. |
Intellectual Property (IP) | Information and works, such as copyrighted and trademarked materials, music, literature, artistic works, and private business documents, that are restricted for use beyond their owners. | Avoiding inappropriate uses of IP, such as allowing AI engines to use protected materials for training purposes. |
De-Identified Data | Data from which PII has been removed or masked, but which could still be linked back to an individual with additional information. | Student records with names replaced by ID codes that a separate roster could re-link to individuals; because re-identification remains possible, de-identified data is less protective than anonymized data. |
Data Governance | The framework of policies, procedures, and standards that ensure effective and ethical management of data. | District-wide rules on who can access student performance data and for what purpose. |
Data Modernization | Upgrading legacy systems and processes to handle, store, and analyze data more efficiently and securely using modern technologies. | Transitioning from paper-based student records to a secure cloud-based platform. |
Algorithmic Bias | Systematic errors in AI outputs that result from biases in the data or design of the algorithm. | AI recommending fewer advanced math opportunities for certain student demographics due to historical inequities in data. |
Data Breach | Unauthorized access or exposure of sensitive information. | A cyberattack leading to the leak of students’ PII from a school database. |
Data Sharing Agreement (DSA) | A legal contract outlining the terms and conditions for sharing data between parties. | Company A and Company B enter into a DSA to share student lunch data for a joint marketing analysis. |
Data Minimization | The principle of collecting only the data necessary for a specific purpose and nothing more. | Only collecting attendance data needed for funding reports, without storing additional unrelated information. |
Consent Management | Processes that ensure individuals or their guardians have given informed consent for data collection and usage. | Gaining parent approval before using a new AI-based classroom app that collects student data. |
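The difference between de-identified and anonymized data is easiest to see with a concrete transformation. The minimal sketch below, written in Python with pandas, uses a hypothetical student-records table; the column names and the hashing approach are illustrative assumptions, not a prescribed district standard.

```python
import hashlib
import pandas as pd

# Hypothetical student records containing PII (names and student IDs).
records = pd.DataFrame({
    "student_id": ["FL1001", "FL1002", "FL1003"],
    "name": ["Ana", "Ben", "Cara"],
    "grade_level": [6, 6, 7],
    "test_score": [82, 91, 75],
})

# De-identified: PII is masked by replacing the student ID with a hashed code.
# The original roster plus the hashing recipe could still re-link rows to
# individuals, so this is NOT the same as anonymized data.
deidentified = (
    records.drop(columns=["name"])
    .assign(student_code=lambda df: df["student_id"].map(
        lambda sid: hashlib.sha256(sid.encode()).hexdigest()[:10]))
    .drop(columns=["student_id"])
)

# Anonymized: individual rows are replaced by aggregates, so no record can be
# traced back to a single student. This also illustrates data minimization:
# only the fields needed for the report are kept.
anonymized = records.groupby("grade_level", as_index=False)["test_score"].mean()

print(deidentified)
print(anonymized)
```

Because the hashed codes are derived deterministically from real student IDs, anyone holding the roster could rebuild the link, which is why the table above treats de-identified data as more sensitive than truly anonymized aggregates.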
Deep Fakes
Deep fakes—synthetic media generated using AI to manipulate images, videos, or audio—have emerged as a significant concern. Deep fakes can be used to create content that appears authentic but is entirely fabricated, posing risks for misinformation, cyberbullying, and even security threats. For Florida’s schools, addressing deep fakes is essential to fostering a safe, informed, and ethical learning environment.
Why Deep Fakes Matter
- Impact on Students and Staff: Deep fakes can be weaponized to target students or staff, potentially leading to cyberbullying, reputational harm, or psychological distress.
- Spread of Misinformation: Schools play a vital role in promoting media literacy. Without proper education on deep fakes, students may struggle to discern credible sources from manipulated content, impacting their ability to make informed decisions.
- Security Risks: Deep fakes could be used to impersonate school administrators or staff in phishing scams, jeopardizing data security and trust.
Approaches for Mitigation
- Promoting Media Literacy: Update lessons and standards to cover recognizing and evaluating deep fakes, including teaching students to analyze inconsistencies in audio or video and discussing the role of algorithms in creating and detecting manipulated media.
- Strengthening Digital Citizenship Programs: Expand Florida’s digital citizenship curriculum to include ethical AI use and the risks associated with manipulated media, equipping students with the skills to assess digital content effectively.
- Leveraging AI Detection Tools: Districts can explore partnerships with technology companies to access AI tools that detect deep fakes, and should train IT staff to use these tools, recognize red flags, and mitigate potential issues.
- Policy Alignment: Schools should adapt existing Acceptable Use Policies (AUPs) to explicitly prohibit the creation, sharing, or distribution of deep fakes.
- Crisis Response Planning: Recognizing the growing risks of data leakage and deep fakes, schools must enhance their Cybersecurity Incident Response Planning (CSIRP). Updating these plans to include specific protocols for these threats will ensure a timely and effective response when incidents occur.
1. Raise Awareness: Host workshops or assemblies to help students, teachers, and parents understand the risks of deep fakes and learn how to identify them.