Reflection on Week 11: Critical Thinking, Equity in Digital Spaces, and Ergonomics
This week we looked at equity in digital spaces, critical thinking, and ergonomics. Inequity in digital spaces shows up through things like limited internet access and the underrepresentation or misrepresentation of certain cultures. Artificial intelligence has perpetuated these problems, and users of digital systems, especially artificial intelligence, need to think critically when using these technologies.
To get a deeper look into equity and AI, I looked for some articles, especially around how AI contributes to digital inequities, and came across this one:
A point that stuck with me from the article was about sycophantic deception: "Sycophants are individuals who use deceptive tactics to gain the approval of powerful figures. They engage in flattery and avoid disagreeing with authority figures. Their primary aim is to gain favor and influence, often at the expense of the long-term goals of the person they are flattering" (Park et al., 2024).
"'Sycophantic deception' is an emerging concern in LLMs. Chatbots have been observed to systematically agree with their conversation partners, regardless of the accuracy of their statements. When faced with ethically complex inquiries, LLMs tend to mirror the user's stance, even if it means forgoing the presentation of an impartial or balanced viewpoint" (Park et al., 2024).
This is concerning and may cause more inequity: if someone holds an extremely biased, one-sided belief that is wrong, but ChatGPT tells them they are right, then that person's belief is confirmed and solidified, regardless of the truth. Misinformation is often spread about marginalized groups, and if ChatGPT is confirming that misinformation, it widens the divide between groups in society.
References
Park, P. S., Goldstein, S., O'Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988
Reflection on Week 10: Surveillance, More on Accessibility, Indigenous Digital Literacies, and more
For this week’s reflection I wanted to focus on Indigenous perspectives on Digital Literacy, especially because I had an Indigenous Mental Health course last semester.
Indigenous culture centres on community and spirituality, and Indigenous peoples' beliefs and values can differ from those of non-Indigenous people. As well, many Indigenous people live in remote communities and may not have access to high-speed internet. Another big issue is generative AI misrepresenting Indigenous knowledge by collapsing it into one overall combination of ideas. This is problematic because there is a wide variety of distinct Indigenous groups who may not share the exact same beliefs or practices.
To show how lumping all Indigenous beliefs and practices into one concept is problematic, I found an article that shows exactly why combining distinct groups and communities into one can have negative outcomes.
How is AI Used in Schools?
AI (Artificial Intelligence) is technology that helps computers think and learn like humans. In schools, AI is used in many ways, like helping students with homework, grading tests, and even tutoring. Some schools use AI chatbots that answer student questions, while others use AI programs to help with writing and math. AI can make learning faster and easier, but some people worry that students might rely on it too much instead of thinking for themselves.
Is AI Good or Bad for Learning?
AI in education has both good and bad sides. One good thing is that AI can help students learn at their own pace. If a student doesn't understand something, AI can explain it again in a different way. AI can also save teachers time by grading work quickly. However, some people worry that students might use AI to cheat instead of learning. Another problem is privacy: some AI tools collect student data, and people wonder if that is safe. AI should be used carefully so that it helps students learn, not replace real learning.
Our Experience with AI in Education
Tabarek and I have explored AI use in education to determine whether it is helpful or harmful for students. While AI can explain things in detail, give you quick answers, or even write a test, it can also be problematic. AI can lead students to cheat, it can give false answers, and students are losing a key aspect of the learning process. Also, AI collects data on students, which is a big privacy concern.
In this stage of our inquiry on the use of artificial intelligence (AI) in education, Tabarek and I looked deeper into the results of a study on AI use in the schooling system. In Inquiry 1 we briefly looked at an article, Unveiling the Shadows: Beyond the Hype of AI in Education, which presents the findings of a study showing a range of negative outcomes of AI in education. We will be providing the findings of the study along with our personal assessment of the possible repercussions of each.
In Unveiling the Shadows: Beyond the Hype of AI in Education, Al-Zahrani (2024) provides the findings of a large study on AI use in education. Here are the study's critical findings, along with what we think may be the potential repercussions of each:
Loss of human connection:
"AI's impact on personal ties between students and educators is cause for worry" because it may lead to students feeling a reduction in their sense of support and emotional connections. It also reduces personalization and individual attention in the learning experience.
Potential Repercussions: Personalization is all about individuality and having a specific way that you learn. A lack of this may lead to the learner missing out on key parts of finding their identity and building skills. If the learner is always using ChatGPT's answers and relying on AI as the basis for completing work, how are they going to learn what works for them when they are on their own? It is important for a student to know their strengths, for example, problem solving. Outside of school, when there are real-world problems in front of them, ChatGPT will not know everything about the situation. For example, if the student was at work and had to resolve a conflict between two coworkers, they would be better equipped to solve the issue if they had grown their problem-solving skills. In this situation they could not just feed the problem into ChatGPT without giving it extreme detail and context, and even then it would still not be the same as if the student had the skill themselves.
Reduced critical thinking and creativity:
AI systems provide predefined answers which can prevent students from engaging in critical analysis or creative expression. This can reduce innovation and original thought.
Potential Repercussions: By relying on AI, students may miss out on building and strengthening key skills in creativity and critical analysis. This can lead to a lack of variety and enjoyment in their future, as they may have a reduced ability to find satisfaction in things. It will also affect their ability to problem solve and draw conclusions if they can't find key pieces of information that they had previously been relying on ChatGPT for.
Unequal access and technological divide:
AI poses the risk of widening gaps in education opportunities because of uneven access to advanced technologies.
Potential Repercussions: This unequal access can deepen disparities between demographics, as geographical location and socioeconomic status play a role in the ability to access artificial intelligence technologies. This can further widen the gap between rich and poor, and between privileged and non-privileged people and societies.
Teacher professional development and role:
Ongoing training and development for teachers is suggested to help adapt and skillfully integrate AI into their teaching. This can help keep the critical role that teachers play in steering student learning.
Potential Repercussions: Teachers have their own journey in the education system as well. With the rapidly growing implementation of AI in the school system, there is a lot of adjusting and learning to be done by the teachers themselves. If teachers are successfully trained and informed on these technologies, and can implement them effectively, they can find a way to stay involved in the learning process while still using artificial intelligence. For example, if a teacher designs part of an assignment to be completed with AI, they will be aware of its use and therefore remain involved in the learning process.
References
Al-Zahrani, A. M. (2024). Unveiling the shadows: Beyond the hype of AI in Education. Heliyon, 10(9). https://doi.org/10.1016/j.heliyon.2024.e30696
Tabarek and I have continued our exploration of AI use in education to determine whether it is helpful or harmful for students. While AI can explain things in detail, give you quick answers, or even write a test, it can also be problematic. AI can lead students to cheat, it can give false answers, and students are losing a key aspect of the learning process. Also, AI collects data on students, which is a big privacy concern.
In our research we found an article by Clugston (2024) that explains that AI in education has both good and bad sides. On the positive side, AI helps students by personalizing learning, meaning it can adjust lessons to match each student's needs. This can make learning easier and more engaging. AI also helps teachers by automating tasks like grading, so they have more time to teach. However, there are concerns too. AI collects a lot of student data, which raises privacy issues. Also, if schools rely too much on AI, students might miss out on real human interaction, which is important for building social and thinking skills.
Also, we found that many teachers are unsure about using AI in schools. A survey by the Pew Research Center (2024) showed that 25% of public K-12 teachers think AI tools do more harm than good in education, while only 6% believe they do more good than harm. Additionally, 32% feel there's an equal mix of benefits and drawbacks, and 35% are uncertain. This uncertainty suggests that while AI has potential, educators are cautious about its role in teaching.
Matthews (2024) talks about how AI can be both helpful and harmful in schools. He says AI can help students learn by giving quick feedback and explaining things in different ways. But he also warns that students might start using AI to do their work for them instead of thinking for themselves. He believes that AI should be a learning tool, not a way to cheat. Schools need to teach students how to use AI the right way so it helps them learn and not just find quick answers.
This research has opened our eyes to both the good and bad sides of AI in education. AI can make learning easier and more personal, but it also brings challenges like cheating and privacy issues. Some teachers are excited about AI, while others aren't sure if it's a good idea. We learned that AI isn't just good or bad; it depends on how we use it. Schools need to find the right balance so AI helps students without replacing real learning.
References
Clugston, B. (2024, July 19). Advantages and disadvantages of AI in education. University Canada West.
Luona, L. (2024, May 15). A quarter of U.S. teachers say AI tools do more harm than good in K-12 education. Pew Research Center.
Reflection on week 9: Data Ownership, Datafication, & Cybersecurity, Intro to Accessibility, and more
This week we looked at how technology is reducing human complexity and how education systems/society are being shaped by datafication and algorithms.
Also, on the topic of digital literacy, we looked at the importance of cybersecurity and saw how cybercrime has grown and is becoming very profitable for hackers, especially now that they are using AI to assist their attacks.
We also talked about adaptive and assistive technologies and how we need systems and technologies that are inclusive and accessible for all users.
To further look at accessibility I used the Web Accessibility Evaluation Tool (WAVE) to see how my first blog reflection post is evaluated.
The WAVE test on my webpage was very helpful and provided good information on how I can make my post more accessible, for example by adding alternative text that describes each image.
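As a simple illustration, alternative text is added through the alt attribute on an image tag (the filename and description below are placeholders, not the actual ones from my post):

```html
<!-- Without the alt attribute, a screen reader may announce only the file name -->
<img src="week9-photo.jpg"
     alt="A laptop on a desk with an external keyboard and monitor">
```

WAVE flags images that are missing this attribute, which is how it pointed me to the fix.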
Reflection on week 6: Artificial Intelligence, Academic Integrity, and more
This week we took a deeper look at AI…
Lucas Wright spoke on the widespread, fast-growing use of AI in the Canadian education system. He says that teachers and students need to adopt Critical Generative AI Literacy. To do so, an AI user needs to understand how the tools work and be aware of the implications of AI for society and the environment.
A big part of Lucas's talk that stood out to me is that AI is advanced technology but has had flaws when it comes to accuracy.
To explore this, I tested the accuracy of the following three prompts:
Prompt: “Generate an image that looks like it was drawn by a kindergarten student of a green flamingo wearing a baseball cap”
Output:
Accuracy rating: 10/10
Prompt: “Act as a principal that has just found me stealing sugar packets from the cafeteria, you are a laid back chill principal and I am one of your favourite students”
Output:
[Leaning against the doorframe, arms crossed, half a smile]
Alright, alright, help me out here for a second… I walk into the cafeteria, minding my own business, and what do I see? One of my star students, someone I genuinely like, stuffing sugar packets into their hoodie like we're running a black-market bakery.
Accuracy rating: 10/10
Both of the first prompts were handled very accurately, so I wanted to try a more detailed image to see if it would miss anything…
Prompt:
“Generate an image of three friends at a Japanese hotspring. The first friend is a chubby walrus wearing a green and blue polka dot bathing suit. The second friend is a hipster camel wearing a chain and white tank top. The third friend is a skinny deer wearing black glasses drinking a soda. Have nice polished wood in the background, as well as the hotsprings”
Output:
Accuracy: 9/10
Here we can see ChatGPT following my prompt correctly but missing some details: the walrus's bathing suit does not really have any green, and I never said to give the camel glasses.
Reflection On Week 5: Intellectual Property, Copyright, Open Licensing, and more
This week we looked at things such as open education, copyright, licensing, and artificial intelligence.
My experiences with copyright:
As a university student I have used a lot of other people's work, for example when applying theories or frameworks to draw conclusions from whatever I am studying. I frequently use and cite journal articles to provide support and evidence for the claims I make in my assignments and projects.
When it comes to ownership of material, I think of a time when I wish I had kept full ownership of one of my videos. I had made a viral video, and many companies reached out to repost it with credit, so I gave permission to some of them. One of those companies was Barstool Sports. Fast forward a year, and a company reached out offering $1000 USD to use my video in a commercial. I was super excited because I had only ever received a tag as credit from previous companies. I went back and read the agreements with each company, and it turned out that under the Barstool Sports agreement, if I gave the video to this new company for their commercial, I could face legal problems. I tried reaching out to Barstool and was ignored every time, and ultimately I could not allow the new company to use my video. I lost out on $1000 USD, and this experience opened my eyes to the importance of giving credit, but also of keeping ownership and understanding every aspect of any agreement with other parties.
AI and Open Education
AI content is often treated as open because it cannot be copyrighted and is considered to be in the public domain. However, when you input something that is copyrighted into AI and have it transform that work, the result is considered a derivative work and needs copyright licensing. Looking at my own experiences with AI and creation, I have only ever used it in the way David Wiley described, calling AI a "more knowledgeable other" that can assist your learning. I have used AI to elaborate on ideas at a range and speed that enhances my learning and creations significantly.
I wanted to try making a derivative work with AI and here is how it went:
Starting Image
I chose an artwork called "The Blue Boy" by Thomas Gainsborough and used ChatGPT 4.0 to transform it into an anime-style image similar to the famous Studio Ghibli style.
Prompt: “Make Ghibli Effect”
Output:
The AI worked perfectly, and if I were to publish this it would be a derivative work, so I would have to credit Thomas Gainsborough.
One other thing to add is that there has been a lot of debate around whether using ChatGPT 4.0 to turn your images into Ghibli-style images is copyright infringement in itself, as you are technically taking Studio Ghibli's style of animation.
This week we looked at Digital Literacy and the importance of being able to appropriately access, analyze, and construct knowledge from digital information.
Through my exploration of The B.C. Post-Secondary Digital Literacy Framework there were two things that I chose to take a deeper look into…
First, something that stuck with me from the digital literacy frame work was this passage:
"A person's access to adequate hardware and software is required for developing digital literacy. However, not all people in B.C. have access to hardware and software, nor are included in digital or online environments" (Sanders & Scanlon, 2021).
This passage is important because it shows the barriers around rising technology and who is able to access it. I wanted to explore this on a bigger scale, and to get a deeper look I found an article that examines these barriers.
The article speaks on how these barriers to digital hardware are problematic and can even be seen as a human rights issue. “Millions of people in the USA still have no home access to high-speed Internet” and “Low-income, people of color, older, Native Americans, and rural residents” are especially affected by the divide (Sanders & Scanlon, 2021).
After reading the article, I see that a big factor contributing to the divide is the inability to access or afford high-speed broadband internet. This can stem from financial, educational, or technological barriers, or from living in a rural or marginalized community.
We can see this digital divide perpetuating "social, economic, and political disparities" (Sanders & Scanlon, 2021).
Second, a part of the digital literacy framework that I believe is very important is around information literacy and understanding that information can be false and biased. It is clear that many companies and publishers spread misinformation and I want to know what leads people to believing such information. This article below had significant findings:
The study found that "users mostly tend to select and share content related to a specific narrative and to ignore the rest". It also found that users are more likely to consume information that comes from, or is presented by, like-minded people who are connected in "homogeneous, polarized clusters" that share similar views (Del Vicario et al., 2016).
This shows that people are likely to believe information that matches what they already believe and that comes from those they identify with. We see that digital literacy is affected by the digital divide and by inequalities in accessing and adopting technologies. For those who do have access, it is critical to acknowledge one's own biases, but also the potential biases within the information one is reading.
References
Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113
Sanders, C. K., & Scanlon, E. (2021). The digital divide is a human rights issue: Advancing social inclusion through social work advocacy. Journal of Human Rights and Social Work, 6(2), 130–143. https://doi.org/10.1007/s41134-020-00147-9
Tabarek and I are inquiring into the use of AI in education for our project. Through exploring articles and videos we found that AI can help some students in school but can also bring multiple issues around cheating and copyright. Students have used AI for a number of creative endeavors, but also for things such as studying, which is concerning as AI has been proven to give false answers at times. Also, with AI there is a big privacy issue because it collects student data. While we think AI is useful in the right circumstances, it can also be harmful, and students, as well as teachers, need to be careful when implementing it into their education.
Step 1: Starting
Think about the big ethical questions around AI in education.
Some initial questions:
Does AI encourage cheating in school?
How do we use AI in a way that's fair and responsible?
Step 2: Deepening
Do some research using online articles, reports, and studies.
Look into examples where AI has changed education.
Step 3: Refining
Update the research questions based on what youâve learned.
Make a simple mind map connecting the pros and cons of AI in schools.
Think about new things, like privacy concerns in AI tools.
Step 4: Planning
Decide how to share the findings (paper, presentation, blog, etc.).
Plan activities to gather more info:
Ask teachers or students about their experiences with AI.
Check out surveys on AI in education.
Look into stats on how often AI is used in schools.
Set up a simple timeline to keep things on track.
Step 5: Learning
Think about the ethical challenges that come up.
Summarize the main points into a final project.
Resources
“The integration of AI technologies in education may lead to a reduced sense of support and understanding among students due to the absence of human educators. Furthermore, AI-mediated learning could compromise emotional connections and empathy within the educational experience” (Al-Zahrani, 2024).