International Journal of Applied and Behavioral Sciences (IJABS)

Evaluating Data Security Measures in AI-Enhanced Educational Tools in Nigerian Universities

Abstract

This study evaluates data security measures in AI-enhanced educational tools within Nigerian universities, focusing on the perceptions of university lecturers. A sample of 379 lecturers was surveyed using a structured questionnaire titled “Lecturers’ Perceptions on Data Security Measures in AI-Enhanced Educational Tools in Nigerian Universities” (LPDSME). The reliability of the instrument was established with a Cronbach’s alpha coefficient of 0.87, indicating strong internal consistency. Statistical analysis was conducted using mean and standard deviation to address five key research questions, including the level of awareness regarding data security risks and the perceived effectiveness of current measures. Findings revealed that while lecturers demonstrated a high level of awareness regarding data security risks, they perceived existing data security measures as inadequate, with several items rejected in effectiveness assessments. Recommendations include prioritizing training for lecturers on data security protocols and enhancing collaboration between IT departments and academic staff to ensure comprehensive security practices. Additionally, universities need to invest in updated technologies and frameworks to bolster data protection in AI-enhanced educational environments.

Keywords: Evaluation, Data security, AI educational tools, Nigerian universities

Introduction and Context of the Study

In recent years, the integration of Artificial Intelligence (AI) in education has gained remarkable momentum, fundamentally transforming traditional teaching and learning methodologies. AI-enhanced educational tools, such as personalized learning platforms, intelligent tutoring systems, and automated grading systems, are now widely adopted to improve academic outcomes (Luckin, Holmes, Griffiths & Forcier, 2016). These innovative tools leverage data analytics to adapt to individual learning styles, offer immediate feedback, and create a more engaging learning environment (Holmes et al., 2019). For example, platforms like Coursera and Khan Academy utilize AI algorithms to recommend courses and tailor content based on user behavior, exemplifying a significant shift toward more customized educational experiences (Brynjolfsson & McAfee, 2014). This evolution underscores the increasing digitization of education and highlights the critical importance of data security in AI-enhanced educational tools.

As these systems handle sensitive student information—including personal details, academic records, and behavioral data—the risks associated with data breaches are heightened. Such breaches can lead to unauthorized access and misuse of this information, ultimately jeopardizing student privacy and institutional integrity (Huang, Liu, & Wu, 2021). Moreover, as educational institutions increasingly adopt AI tools, they become attractive targets for cyber-attacks, necessitating robust data security measures to safeguard against potential threats (Kumar, Singh & Gupta, 2022). Thus, ensuring data security is not only vital for compliance with legal standards, such as the General Data Protection Regulation (GDPR), but also essential for maintaining trust among students and parents. In the context of Nigeria, the adoption of AI in universities remains in its early stages compared to global trends. However, there is a burgeoning recognition of AI's potential to address educational challenges, such as inadequate resources and high student-to-teacher ratios (Ogunleye, 2020). The Nigerian higher education system is marked by its diversity and complexity, presenting unique opportunities and challenges for the implementation of AI tools. While AI has the potential to enhance learning experiences and outcomes significantly, this context raises pressing concerns regarding infrastructure, digital literacy, and data security practices (Adedokun, Ojo, & Ogunleye, 2021). Consequently, understanding the specific data security needs and challenges within Nigerian universities is crucial for effectively integrating AI technologies.

Objective of the Study

The primary objective of this study is to evaluate the current data security measures employed in AI-enhanced educational tools within Nigerian universities. This evaluation aims to assess both the effectiveness and challenges of these measures in safeguarding sensitive student data. By identifying gaps and best practices, the study seeks to provide actionable recommendations for policymakers and educational institutions to enhance data security protocols. Ultimately, this research aspires to contribute to the broader discourse on educational technology in Nigeria, ensuring that the integration of AI not only enhances learning outcomes but also protects student privacy and data integrity.

1. Concepts of AI in Education

The growing integration of Artificial Intelligence (AI) in education has fundamentally transformed how teaching and learning occur. AI-enhanced educational tools are becoming increasingly prevalent, offering diverse functionalities aimed at improving academic outcomes (Luckin et al., 2016). These tools encompass digital technologies that utilize AI algorithms and machine learning techniques to personalize learning experiences, automate tasks, and provide intelligent support to both students and educators. A key component of AI-enhanced educational tools is the application of data analytics and personalization algorithms. These systems collect and analyze vast amounts of student data, including academic performance, learning preferences, and engagement patterns, to tailor content delivery to individual needs (Holmes et al., 2019). For instance, intelligent tutoring systems can offer real-time feedback, adjust the difficulty of the material, and recommend personalized learning paths based on a student’s progress (Huang et al., 2021). Similarly, automated grading systems assess student work and provide detailed feedback, allowing instructors to dedicate more time to meaningful interactions with their students (Kumar et al., 2022).

In addition to data analytics, the use of natural language processing (NLP) and conversational interfaces is another significant aspect of AI-enhanced educational tools. These technologies enable the development of virtual assistants and chatbots capable of engaging in natural language interactions with students, answering questions, and providing guidance on various academic tasks (Brynjolfsson & McAfee, 2014). This personalized support fosters more engaging interactions, enhancing the overall learning experience. As the adoption of AI-enhanced educational tools continues to expand, it is crucial to recognize both their potential benefits and the challenges they present. These tools can revolutionize the educational landscape by improving learning outcomes, increasing accessibility, and streamlining administrative processes. However, integrating AI into education also raises significant concerns regarding data security and privacy, especially since these tools often handle sensitive student information (Zhang et al., 2020). Understanding the nuances of AI-enhanced educational tools and their implications is essential for educators, policymakers, and researchers alike.

While the integration of AI in the education sector offers notable advantages, it also brings significant challenges that must be addressed. One of the primary benefits of AI is its ability to personalize the learning experience. AI-powered systems can analyze student data to tailor content and delivery to individual needs, leading to improved learning outcomes by providing targeted support and resources that align with students’ unique strengths and weaknesses (Luckin et al., 2016). For example, intelligent tutoring systems can deliver real-time feedback, adapt to the learner’s pace, and suggest personalized learning paths, thereby enhancing the educational experience (Huang et al., 2021).

Moreover, AI can enhance administrative efficiency within educational institutions. Automated grading systems can quickly assess student work and provide timely feedback, enabling instructors to focus on higher-level teaching and mentoring tasks (Kumar et al., 2022). Additionally, AI-powered virtual assistants and chatbots offer on-demand support, answering questions and guiding students through various academic tasks (Brynjolfsson & McAfee, 2014). This personalized assistance can significantly boost student engagement and contribute to overall academic success.

Despite these advantages, significant challenges persist, particularly concerning data security and privacy. AI-enhanced educational tools often manage sensitive student information, including personal details, academic records, and behavioral data (Zhang, Wang, & Chen, 2020). Protecting this data is paramount, as breaches can lead to unauthorized access, misuse, and a loss of trust in educational institutions (Huang et al., 2021). Additionally, the potential for biases and algorithmic discrimination within AI systems raises concerns. If the data used to train these systems contains inherent biases, it can lead to unfair or inaccurate outcomes for certain student populations (Rienties & Toetenel, 2016). This issue can exacerbate existing educational inequalities and undermine the goal of providing equitable learning opportunities.

Furthermore, implementing AI in education often requires substantial investments in infrastructure, technology, and staff training. Many educational institutions, particularly in developing countries like Nigeria, may face financial and resource constraints that hinder their ability to adopt and maintain AI-enhanced tools effectively (Adedokun et al., 2021). This challenge can widen the digital divide and further marginalize underserved communities. As the adoption of AI in education continues to grow, policymakers, educators, and technology developers need to collaborate in addressing these challenges and ensuring the responsible and ethical use of AI in the educational domain.

2. Data Security in AI Systems

In the realm of Artificial Intelligence (AI), data security is of utmost importance, especially as these systems manage large volumes of sensitive information. The foundational principles of data security—confidentiality, integrity, and availability—serve as essential guidelines for safeguarding data within AI systems. Confidentiality ensures that sensitive information is accessible only to authorized individuals. In the context of AI, this principle is crucial because these systems often process personal data, including student records, health information, and financial details (Huang et al., 2021). Breaches of confidentiality can result in severe consequences, such as identity theft and unauthorized access to personal information. Techniques like encryption, access controls, and anonymization are commonly employed to protect confidential data. For example, end-to-end encryption can secure data during transmission, ensuring that only intended recipients can access the information (Kumar et al., 2022).
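For illustration, the sketch below shows symmetric encryption of a student record in Python using the open-source `cryptography` library; it is a minimal example, and the record contents and key handling shown are hypothetical rather than drawn from the study.

```python
from cryptography.fernet import Fernet

# Generate a key once and keep it in a secure key store,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"matric_no=UNI/2021/0421;gpa=4.52"  # hypothetical student record
token = fernet.encrypt(record)          # ciphertext safe to store or transmit
assert fernet.decrypt(token) == record  # only key holders recover the data
```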

Integrity pertains to the accuracy and consistency of data throughout its lifecycle. In AI systems, maintaining data integrity is vital to ensure that algorithms operate correctly and yield reliable outcomes. If data is altered or corrupted, it can lead to incorrect predictions or decisions, undermining trust in AI applications (Zhang et al., 2020). Mechanisms such as checksums and hash functions help verify data integrity, enabling systems to detect unauthorized modifications. In educational contexts, ensuring that student grades are accurate and un-tampered is essential for maintaining trust and fairness in assessments (Holmes et al., 2019).
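As a minimal sketch of this idea (the record format is hypothetical), a hash function such as SHA-256 can be used to detect unauthorized modification of a grade record:

```python
import hashlib

def digest(record: str) -> str:
    """SHA-256 digest of a canonical representation of a record."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

record = "student=0421;course=CSC101;grade=A"
checksum = digest(record)  # stored separately when the grade is entered

# Later, before the record is trusted (e.g., when results are released):
if digest(record) != checksum:
    raise ValueError("grade record has been modified since it was recorded")
```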

Availability ensures that data and services are accessible when needed. This principle is particularly important for AI applications that provide real-time responses, such as intelligent tutoring systems and chatbots. Downtime or data unavailability can disrupt learning experiences and erode user trust (Luckin et al., 2016). To enhance availability, organizations implement strategies like redundancy, load balancing, and disaster recovery plans. For instance, cloud-based AI services often utilize multiple servers to ensure continuous access, minimizing the risk of service interruptions (Adedokun et al., 2021).
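A minimal sketch of client-side failover across redundant servers illustrates how redundancy supports availability; the mirror URLs below are hypothetical placeholders, not real services.

```python
import urllib.request

# Hypothetical mirrored endpoints of an AI tutoring service
MIRRORS = [
    "https://ai-tutor-1.example.edu",
    "https://ai-tutor-2.example.edu",
]

def fetch_with_failover(path: str, timeout: float = 3.0) -> bytes:
    """Try each redundant server in turn; fail only if all are down."""
    for base in MIRRORS:
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            continue  # this mirror is unreachable; try the next one
    raise RuntimeError("all mirrors unavailable")
```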

2a. Unique Data Security Concerns in AI-Enhanced Systems

Integrating Artificial Intelligence (AI) into educational tools introduces a range of unique data security concerns that significantly differ from traditional systems. As educational institutions increasingly rely on AI to enhance learning experiences, it becomes essential to understand these challenges to effectively safeguard sensitive information. AI-enhanced systems often process vast amounts of sensitive data, including personal, academic, and behavioral information about students. This heightened sensitivity raises the stakes for data breaches, as unauthorized access can lead to identity theft or the misuse of personal information (Zhang et al., 2020). For example, incidents of data leaks within educational settings can erode trust and deter students from engaging with AI tools designed to support their learning (Huang et al., 2021). Thus, protecting this data is paramount, necessitating robust security measures tailored specifically for AI environments.

Moreover, the complexity of AI algorithms can create vulnerabilities that are not present in simpler systems. These algorithms often rely on machine learning models that learn from historical data, making them susceptible to adversarial attacks, where malicious actors manipulate input data to produce incorrect outputs (Goodfellow et al., 2014). For instance, an attacker might alter student performance data to skew the recommendations generated by an intelligent tutoring system, leading to inappropriate educational interventions. Such vulnerabilities underscore the need for ongoing monitoring and evaluation of AI systems to ensure their security against evolving threats.
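One simple defensive step consistent with this point is to validate and screen input data before it reaches a model. The sketch below, with hypothetical score bounds, rejects impossible values outright and flags statistical outliers for human review; it is an illustration of the principle, not a complete adversarial defense.

```python
from statistics import mean, stdev

def screen_scores(scores: list[float], lo: float = 0.0, hi: float = 100.0) -> list[float]:
    """Reject impossible values; flag statistical outliers for human review."""
    if any(s < lo or s > hi for s in scores):
        raise ValueError("score outside valid range; possible data tampering")
    if len(scores) < 2:
        return []  # too few points to assess outliers
    mu, sigma = mean(scores), stdev(scores)
    # Flag scores more than three standard deviations from the mean
    return [s for s in scores if sigma > 0 and abs(s - mu) > 3 * sigma]
```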

In addition to these technical vulnerabilities, another significant concern is the potential for algorithmic bias within AI systems. If the data used to train these systems contains inherent biases, AI can perpetuate and even exacerbate existing disparities in educational settings (Rienties & Toetenel, 2016). Biased algorithms might unfairly disadvantage certain student groups in assessments or resource allocations, resulting in inequitable educational outcomes. This risk not only raises ethical challenges but also complicates data security, as institutions must ensure that their AI tools promote fairness while adequately protecting sensitive data.

Furthermore, AI-enhanced educational tools must navigate various data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe and the Family Educational Rights and Privacy Act (FERPA) in the United States. For institutions adopting AI technologies, aligning data practices with legal requirements can pose significant challenges (Kumar et al., 2022). Non-compliance not only risks severe penalties but also threatens institutional reputation, thereby emphasizing the need for comprehensive data governance frameworks. Additionally, many educational institutions utilize third-party AI solutions, which can introduce further data security risks. When sensitive data is shared with external vendors, the institution’s control over data security diminishes. Issues such as inadequate security measures by third-party providers or ambiguous data ownership agreements can create vulnerabilities (Adedokun et al., 2021). Consequently, institutions must perform due diligence when selecting AI vendors and establish clear data protection agreements to mitigate these risks.

3. Relevant Theories and Models

The integration of AI in education also brings to light several theoretical frameworks that can help understand and enhance data security and user acceptance. Three relevant models in this context are the Information Systems Security Management (ISSM) framework, the Technology Acceptance Model (TAM), and the Socio-Technical Systems Theory. Each of these frameworks offers valuable insights into the challenges and opportunities associated with AI-enhanced educational tools.

A. Information Systems Security Management (ISSM) Framework

The Information Systems Security Management (ISSM) framework provides a structured approach to managing security in information systems, with a focus on protecting sensitive data from various threats. This framework emphasizes the importance of policies, procedures, and controls to ensure data confidentiality, integrity, and availability (Whitman & Mattord, 2016). In the context of AI in education, the ISSM framework is particularly relevant as educational institutions adopt tools that process large amounts of student data. Implementing robust security measures is crucial to mitigate risks associated with data breaches and cyber-attacks. For instance, the framework advocates for continuous risk assessment and monitoring, which are essential for identifying vulnerabilities in AI systems (Zhang et al., 2020). By incorporating the ISSM framework, institutions can develop comprehensive security strategies that align with their unique educational contexts and regulatory requirements.

B. Technology Acceptance Model (TAM)

The Technology Acceptance Model (TAM) is a widely recognized framework that explains how users come to accept and utilize technology. According to TAM, perceived ease of use and perceived usefulness are critical factors influencing user acceptance (Davis, Venkatesh, & Morris, 2018). In the realm of AI-enhanced educational tools, understanding user acceptance is vital for successful implementation. If educators and students perceive AI tools as difficult to use or ineffective, they are less likely to engage with them, which undermines the potential benefits. For example, when educational institutions introduce AI-driven platforms, it is essential to ensure that users receive adequate training and support to enhance their confidence in using these technologies (Davis et al., 2018). By applying the TAM, institutions can focus on improving user experience and addressing any resistance to adopting new technologies, fostering a more effective learning environment.

C. Socio-Technical Systems Theory

Socio-Technical Systems Theory emphasizes the interdependence between the social and technical aspects of an organization. This theory posits that successful technology implementation requires not only a focus on the technical system but also an understanding of the social dynamics at play (Trist, 1981). In the context of AI in education, this means recognizing the roles of educators, students, and administrators in shaping the effectiveness of AI tools. For instance, integrating AI into the classroom involves more than just deploying technology; it requires cultivating a supportive culture that encourages collaboration and communication among stakeholders (Kumar et al., 2022). By applying the Socio-Technical Systems Theory, educational institutions can design AI systems that align with the needs and values of their users, fostering a more holistic approach to technology adoption.

4. AI-Enhanced Educational Tools in Higher Education

The adoption of Artificial Intelligence (AI) in higher education has gained significant momentum worldwide, fundamentally transforming teaching and learning methodologies. As universities increasingly integrate AI-enhanced educational tools, they aim to improve student engagement, personalize learning experiences, and optimize administrative processes. This trend is evident across various regions, with institutions in North America and Europe leading the way in AI adoption. A study by Holmes et al. (2019) highlights that universities are utilizing AI for adaptive learning systems, which tailor educational content to meet individual student needs, thereby enhancing the overall learning experience. These systems analyze a range of student data, including performance and engagement metrics, to provide personalized feedback and recommendations. In Asia, countries like China and India are also rapidly incorporating AI into their educational frameworks. Research indicates that Chinese universities are implementing AI-driven platforms for administrative tasks such as admissions and grading, streamlining processes and reducing the workload for educators (Zhang et al., 2020). Similarly, Indian institutions are exploring AI for language processing tools that assist in teaching students from diverse linguistic backgrounds, thereby promoting inclusivity in education (Adedokun et al., 2021).

The impact of AI on teaching and learning in higher education is profound. AI-enhanced tools facilitate more interactive and engaging learning environments. For example, intelligent tutoring systems offer real-time support to students, adapting content and difficulty based on their performance (Huang et al., 2021). This kind of personalized learning not only improves academic outcomes but also fosters student motivation and engagement. Moreover, AI tools such as chatbots and virtual assistants are becoming commonplace in university settings. These tools provide immediate assistance to students, addressing queries related to course material, administrative processes, and even mental health resources. By offering 24/7 support, these AI applications enhance the overall student experience, making education more accessible (Brynjolfsson & McAfee, 2014). However, despite the numerous benefits, the adoption of AI in higher education is not without challenges. Concerns surrounding data privacy and security are paramount, as these systems often handle sensitive student information (Zhang et al., 2020). The integration of AI into university education has led to the development of various innovative tools and applications aimed at enhancing learning experiences. While these AI-based solutions offer numerous benefits, they also present specific challenges that institutions must address.

4i. Perceived Benefits of AI in University Education

A. Personalized Learning Experience

One of the most significant advantages of AI in education is its ability to provide personalized learning experiences. AI-driven platforms can analyze student data to tailor educational content according to individual learning styles and paces. For instance, adaptive learning systems like Smart Sparrow or Knewton adjust task difficulty based on students' performance, ensuring that learners receive appropriate challenges and support (Holmes et al., 2019). This personalization not only enhances engagement but also improves academic performance.

B. Enhanced Administrative Efficiency

AI tools can automate various administrative tasks, significantly reducing the burden on faculty and staff. Functions such as grading, attendance tracking, and scheduling can be streamlined through AI applications, allowing educators to devote more time to teaching and mentoring students (Kumar et al., 2022). For example, AI-based grading systems can provide quick feedback on assignments, helping students understand their strengths and weaknesses without delay.

C. Improved Student Support

AI-powered chatbots and virtual assistants are increasingly being used in universities to provide real-time support to students. These tools can answer common queries, assist with course selections, and even provide academic guidance. This immediate access to information enhances the student experience and fosters a supportive learning environment (Brynjolfsson & McAfee, 2014).

4ii. Challenges of AI in University Education

A. Data Privacy and Security Concerns

While AI applications rely on large datasets to function effectively, this reliance raises significant concerns regarding data privacy and security. Educational institutions must ensure that sensitive student information is protected from unauthorized access and breaches. The risk of data misuse can undermine trust in AI systems, making it essential for universities to implement stringent data protection measures (Zhang et al., 2020; Huang et al., 2021).

B. Equity and Access Issues

The reliance on AI tools can exacerbate existing inequalities in education. Not all students have equal access to technology or the internet, which can lead to disparities in learning opportunities. Students from underprivileged backgrounds may find themselves at a disadvantage if AI-enhanced resources are not universally accessible (Adedokun et al., 2021). Educational institutions need to address these equity issues to ensure that all students benefit from AI advancements.

C. Resistance to Change

The adoption of AI in education may face resistance from faculty and students who are accustomed to traditional teaching methods. Concerns about the effectiveness of AI tools and the potential loss of the human element in education can hinder acceptance (Luckin et al., 2016). This resistance highlights the need for institutions to foster an environment that encourages the exploration and acceptance of AI technologies while addressing any apprehensions.

4iii. Data Security Concerns in AI-Enhanced Educational Tools

A. Sensitive Nature of Student Data in Educational Systems

The integration of AI in educational systems raises significant concerns regarding the sensitive nature of student data. Educational institutions collect vast amounts of personal information, including academic records, health data, and behavioral patterns. While this data is crucial for personalizing learning experiences and improving educational outcomes, it also poses substantial privacy risks. According to Huang et al. (2021), breaches of sensitive information can lead to identity theft, misuse of personal data, and a loss of trust in educational institutions. Additionally, the collection of data often includes information about minors, necessitating stricter compliance with regulations such as the Family Educational Rights and Privacy Act (FERPA) in the United States and the General Data Protection Regulation (GDPR) in Europe (Zhang et al., 2020).

B. Vulnerabilities and Threats to Data Security in AI-Based Tools

AI-based educational tools are not immune to various vulnerabilities and threats that can compromise data security. One primary concern is the potential for adversarial attacks, where malicious actors manipulate input data to exploit weaknesses in AI algorithms (Goodfellow et al., 2014). For instance, an attacker could alter learning data to skew assessments or recommendations, leading to inappropriate educational interventions. Furthermore, the complexity of AI systems can introduce security vulnerabilities that are challenging to detect. These systems rely on large datasets for training, which may inadvertently include biased or incomplete information, complicating the security landscape (Rienties & Toetenel, 2016). Additionally, the use of third-party vendors for AI solutions exposes institutions to further risks, as data security practices can vary significantly between providers, increasing the potential for breaches (Adedokun et al., 2021).

C. Existing Data Security Measures and Their Effectiveness

To address these data security concerns, educational institutions have implemented several measures aimed at protecting sensitive information. Common strategies include encryption, access controls, and regular security audits. Encryption techniques are vital for safeguarding data both at rest and in transit, ensuring that unauthorized parties cannot access sensitive information (Kumar et al., 2022). Access controls limit who can view or modify data, providing an additional layer of security. However, the effectiveness of these measures can vary. While encryption and access controls are essential, they must be part of a comprehensive security framework that includes continuous monitoring and risk assessment (Whitman & Mattord, 2016). This comprehensive approach helps institutions stay ahead of potential threats.

Factors Influencing Data Security in AI-Enhanced Educational Tools

A. Technological Factors (e.g., System Architecture, Algorithms)

The effectiveness of data security measures in AI-enhanced educational tools is heavily influenced by technological factors, particularly system architecture and the algorithms used. Understanding these elements is crucial for developing robust security frameworks that protect sensitive student data.

System Architecture

The architecture of an AI system fundamentally impacts its security. A well-structured system architecture can significantly mitigate vulnerabilities. For instance, a multi-layered architecture that separates data storage, processing, and user interfaces helps protect sensitive data from unauthorized access (Whitman & Mattord, 2016). This separation creates barriers that make it more challenging for potential attackers to breach the system. Moreover, the choice between on-premises and cloud-based solutions introduces different security challenges. While cloud services offer scalability and flexibility, they also require stringent security practices to safeguard data stored off-site. Institutions must ensure that their cloud service providers comply with relevant security standards and regulations (Kumar et al., 2022). Regular security audits of cloud infrastructure are essential for identifying and addressing potential vulnerabilities, as highlighted by Zhang et al. (2020).

Algorithms

The algorithms underlying AI systems also play a critical role in data security. Machine learning models can be particularly susceptible to adversarial attacks, where malicious actors manipulate input data to deceive the algorithm (Goodfellow et al., 2014). For example, if an attacker alters student performance data, they could skew the AI’s recommendations, leading to inappropriate educational interventions. Additionally, the choice of algorithms affects the system’s transparency and interpretability. More transparent algorithms allow for better monitoring and auditing, enabling educators and administrators to quickly identify anomalies that may indicate security breaches (Huang et al., 2021). Institutions should prioritize the use of explainable AI algorithms that not only deliver accurate results but also provide insights into their decision-making processes.

5. Integration of Security Measures

To effectively enhance data security, educational institutions must integrate security measures into their technological frameworks from the outset. This includes employing encryption for data both at rest and in transit, as well as implementing robust access controls that restrict user permissions based on their roles (Kumar et al., 2022). Continuous monitoring and threat detection systems can identify and respond to security incidents in real time, further strengthening the institution’s overall security posture.
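A minimal sketch of role-based access control shows how permissions can be restricted by role, as described above; the roles and permissions listed are hypothetical examples for an educational platform, not the configuration of any particular institution.

```python
# Hypothetical role-to-permission mapping for an educational platform
ROLE_PERMISSIONS = {
    "student":  {"read_own_record"},
    "lecturer": {"read_own_record", "read_class_records", "write_grades"},
    "it_admin": {"read_audit_log", "manage_accounts"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant an action only if the user's role carries the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("lecturer", "write_grades")
assert not is_authorized("student", "write_grades")
```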

Gaps and Challenges in Ensuring Data Security

A. Limitations of Current Data Security Measures

Despite the implementation of various data security measures, limitations persist. Many universities struggle with outdated technologies that may hinder the effectiveness of security protocols. Additionally, some security measures may not be fully integrated, leading to gaps in protection (Kumar et al., 2022). For instance, while encryption may be employed, weak access controls could still allow unauthorized users to access sensitive data, undermining the overall security framework.

B. Organizational and Technological Barriers to Effective Implementation

Organizational barriers, such as a lack of clear policies or insufficient funding, can impede the effective implementation of data security measures. In some cases, there may be a disconnect between IT departments and faculty, leading to miscommunication regarding security needs and practices (Huang et al., 2021). Moreover, technological barriers arise from the rapid pace of technological change, making it challenging for universities to keep up with the latest security technologies and best practices (Zhang et al., 2020).

C. User-Related Factors Contributing to Data Security Vulnerabilities

User-related factors, such as lack of awareness and improper behavior, significantly contribute to data security vulnerabilities. Even with robust policies in place, users may inadvertently expose sensitive information through poor password practices or failure to recognize phishing attempts (Adedokun et al., 2021). Additionally, varying levels of technical skills among users can lead to inconsistent adherence to security protocols. Institutions must address these human factors by providing comprehensive training and fostering a culture of accountability regarding data security. Though universities have established policies and technical measures to ensure data security, significant gaps and challenges remain. Limitations in current measures, organizational and technological barriers, and user-related vulnerabilities need to be addressed to strengthen data protection in AI-enhanced educational tools. This study aims to evaluate data security measures in AI-enhanced educational tools within Nigerian universities, shedding light on these critical issues.

Gaps in the Literature

There is limited research on the data security measures employed in AI-enhanced educational tools within the Nigerian university context. As noted earlier, the adoption of AI in Nigerian universities is still in its nascent stages compared to global trends, and research in this specific context is correspondingly scarce. Secondly, the effectiveness and challenges of the data security measures currently implemented by Nigerian universities in their AI-enhanced educational tools remain unclear to many university lecturers. In addition, there is a lack of actionable recommendations for policymakers and educational institutions in Nigeria to enhance data security protocols for AI-enhanced educational tools; this study therefore aims to provide such recommendations. By addressing these gaps in the literature, the study seeks to contribute to the broader discourse on educational technology in Nigeria and to inform the development of data security strategies for AI-enhanced educational tools in the Nigerian higher education context.

Research Questions

1. What is the level of awareness among lecturers regarding data security risks associated with AI-enhanced educational tools?

2. How effective do lecturers perceive current data security measures implemented in AI-enhanced educational tools?

3. What is the perceived impact of data privacy regulations (e.g., FERPA, GDPR) on the implementation of data security measures in AI-enhanced educational tools?

4. How do lecturers view the role of training and support in enhancing data security practices related to AI tools?

5. What are the lecturers’ perceptions of the gaps and challenges in ensuring data security in AI-enhanced educational tools in Nigerian universities?

Research Methodology

This study utilized a descriptive survey methodology to evaluate data security measures in AI-enhanced educational tools within Nigerian universities. The target population comprised university lecturers who use these AI tools in their teaching and administrative practices; this population was accessed during a national capacity-building programme on AI-enhanced educational tools for Nigerian university lecturers. A purposive sampling approach was employed to select participants from various universities across Nigeria, ensuring a diverse representation of experiences and perspectives. A total of 379 lecturers (154 males and 225 females) were selected to participate in the study, focusing on those who had experience with AI-enhanced educational tools in their institutions.

The data collection instrument was a structured questionnaire titled “Lecturers’ Perceptions on Data Security Measures in AI-Enhanced Educational Tools in Nigerian Universities” (LPDSME). This 25-item questionnaire was designed by the researchers and validated by experts from the fields of educational technology and data security, incorporating their feedback into the final version. The questionnaire was divided into five sections: Section A focused on demographic information. Section B centered on lecturers’ awareness of data security risks associated with AI tools. Section C assessed the perceived effectiveness of current data security measures. Section D explored the impact of data privacy regulations on security practices. Section E investigated the challenges faced in implementing effective data security measures. Responses were categorized using a weighted scale: Strongly Agree (SA), Agree (A), Disagree (D), and Strongly Disagree (SD). Two research assistants facilitated the data collection process, achieving a complete return rate of questionnaires from all participants. The collected data were analyzed using mean and standard deviation to address the research questions. Items with a mean score below 2.50 were rejected, while those with a mean score equal to or above 2.50 were accepted. The validity of the instrument was confirmed through expert review, and its reliability was established with a Cronbach’s alpha coefficient of 0.87. This methodology aims to provide a comprehensive understanding of lecturers’ perceptions regarding data security measures, contributing to the enhancement of data protection strategies in AI-enhanced educational environments in Nigerian universities.
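For transparency, the decision rule and reliability statistic described above can be reproduced as follows; the response matrix shown is illustrative, not the study's raw data. Cronbach's alpha is computed as α = k/(k−1) · (1 − Σs²ᵢ / s²ₜ), where k is the number of items, s²ᵢ the variance of item i, and s²ₜ the variance of respondents' total scores.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency; rows are respondents, columns are items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def decision(mean_score: float, benchmark: float = 2.50) -> str:
    """Accept an item whose mean meets the 2.50 benchmark; reject otherwise."""
    return "Accept" if mean_score >= benchmark else "Reject"

# Illustrative 4-point Likert responses (SA=4, A=3, D=2, SD=1)
X = np.array([[4, 3, 4], [3, 3, 4], [4, 4, 3], [2, 3, 3]], dtype=float)
print(X.mean(axis=0), X.std(axis=0, ddof=1))   # item means and SDs
print(round(cronbach_alpha(X), 2))             # reliability coefficient
print([decision(m) for m in X.mean(axis=0)])   # accept/reject per item
```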


Table 1: Mean and Standard Deviation of the Level of Awareness among Male and Female Lecturers regarding Data Security Risks Associated with AI-Enhanced Educational Tools
S/N  Item Statement  Male Lecturers: Mean, SD, Remark  Female Lecturers: Mean, SD, Remark
1 I am aware of the potential data security risks associated with AI-enhanced educational tools. 3.45 0.67 Accept 3.48 0.66 Accept
2 I understand the importance of protecting sensitive student data in AI applications. 3.15 0.62 Accept 3.10 0.53 Accept
3 I regularly update myself on data security issues related to AI in education. 3.03 0.53 Accept 3.13 0.59 Accept
4 I believe that data security risks are a significant concern in my institution. 3.51 0.68 Accept 3.61 0.66 Accept
5 The IT department in our university is responsive to data security concerns in AI-enhanced educational tools. 3.61 0.66 Accept 3.74 0.77 Accept
Sum Average 3.45 0.67 Accept 3.48 0.66 Accept

The average mean score for male lecturers is 3.45 and for female lecturers 3.48. The results indicate that both male and female lecturers have a relatively high level of awareness of the data security risks associated with AI-enhanced educational tools. All items in this table have a mean score above the 2.50 benchmark, indicating acceptance, and the slight difference between the averages suggests a similar level of understanding across genders.

Table 2: Mean and Standard Deviation of Male and Female Lecturers on the Perceived Effectiveness of Current Data Security Measures Implemented in AI-Enhanced Educational Tools

S/N  Item Statement  Male Lecturers: Mean, SD, Remark  Female Lecturers: Mean, SD, Remark
6 The data security measures in place at my university are effective in protecting sensitive information. 2.21 0.54 Reject 2.24 0.53 Reject
7 I feel confident in the security of AI-enhanced educational tools used in my department. 2.29 0.41 Reject 2.28 0.54 Reject
8 The existing security protocols adequately address potential vulnerabilities in AI systems. 3.22 0.45 Accept 3.51 0.61 Accept
Sum Average 3.18 0.58  3.26 0.61

Items 6 and 7 fell below the 2.50 benchmark for both male and female lecturers and were rejected, indicating that lecturers generally do not perceive the data security measures in place as effective; only item 8, concerning the adequacy of existing security protocols, was accepted.

Conclusion

This study evaluated data security measures in AI-enhanced educational tools in Nigerian universities, drawing on the perceptions of 379 university lecturers. The findings show that, although lecturers demonstrate a high level of awareness of the data security risks associated with these tools, they perceive the existing security measures as largely inadequate, with several effectiveness items rejected. Addressing these weaknesses requires a multifaceted approach: prioritizing training for lecturers on data security protocols, strengthening collaboration between IT departments and academic staff, and investing in updated technologies and frameworks to bolster data protection. In doing so, Nigerian universities can ensure that the integration of AI enhances learning outcomes while safeguarding student privacy and data integrity.