Category: Reflections

This is the category to apply to your Weekly Reflection posts from the course.

Weekly Reflection Post 6

Reflection on Week 11: Critical Thinking, Equity in Digital Spaces, and Ergonomics

This week we looked at equity in digital spaces, critical thinking, and ergonomics. Inequity appears in digital spaces through things like limited internet access and the underrepresentation or misrepresentation of certain cultures. Artificial intelligence has perpetuated these problems, so users of digital systems, especially artificial intelligence, need to think critically when using these technologies.

To get a deeper look into equity and AI, I looked for some articles, especially around how AI contributes to digital inequities, and came across this one:

A point that stuck with me from the article was about sycophantic deception. “Sycophants are individuals who use deceptive tactics to gain the approval of powerful figures. They engage in flattery and avoid disagreeing with authority figures. Their primary aim is to gain favor and influence, often at the expense of the long-term goals of the person they are flattering” (Park et al., 2024).

“‘Sycophantic deception’ is an emerging concern in LLMs. Chatbots have been observed to systematically agree with their conversation partners, regardless of the accuracy of their statements. When faced with ethically complex inquiries, LLMs tend to mirror the user’s stance, even if it means forgoing the presentation of an impartial or balanced viewpoint” (Park et al., 2024).

This is concerning and may cause more inequity: if someone holds an extremely biased, one-sided belief that is wrong, but ChatGPT tells them they are right, then that person’s belief is confirmed and solidified, regardless of the truth. Misinformation is often spread about marginalized groups, and if ChatGPT confirms that misinformation, it widens the divide between groups in society.

References

Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 100988. https://doi.org/10.1016/j.patter.2024.100988

Weekly Reflection Post 5

Reflection on Week 10: Surveillance, More on Accessibility, Indigenous Digital Literacies, and more

For this week’s reflection I wanted to focus on Indigenous perspectives on Digital Literacy, especially because I had an Indigenous Mental Health course last semester.

Indigenous culture centres on community and spirituality, and Indigenous peoples’ beliefs and values can differ from those of non-Indigenous people. Many Indigenous people also live in remote communities and may not have access to high-speed internet. Another big issue is generative AI misrepresenting Indigenous knowledge by collapsing it into one overall combination of ideas. This is problematic because there is a wide variety of Indigenous groups who do not all follow the same beliefs or practices.

To show why merging all Indigenous beliefs and practices into one concept is problematic, I found an article that illustrates how treating distinct groups and communities as one can have negative outcomes.

In this article, we see a story where a group of students created a graphic novel exploring Métis identity. The problem arose when Western Métis communities argued that simply having mixed ancestry does not make someone Métis, and challenged the idea of the “Eastern Métis.” These Western Métis communities come from and belong to the Red River region, which has a completely different culture, history, and even language than those who call themselves Eastern Métis. The author of the article says that “While the French term métis initially referred to those with mixed European and First Nations ancestry, the term has come to refer to descendants of a specific group in western Canada’s Red River region.” This article shows the importance of not misrepresenting Indigenous groups and reflects why artificial intelligence systems should not lump all Indigenous peoples into one combined group. Given that there are over 600 Indigenous communities in Canada, it is crucial for AI systems to address this issue.

References

https://www.theguardian.com/world/article/2024/may/19/graphic-novel-canada-indigenous-identity-

Weekly Reflection Post 4

Reflection on week 9: Data Ownership, Datafication, & Cybersecurity, Intro to Accessibility, and more

This week we looked at how technology is reducing human complexity and how education systems/society are being shaped by datafication and algorithms.

Also, on the topic of digital literacy, we looked at the importance of cybersecurity and saw how cybercrime has grown and become very profitable for hackers, especially now that they use AI to assist their hacking.

We also talked about adaptive and assistive technologies and how we need systems and technologies that are inclusive and accessible for all users.

To further look at accessibility I used the Web Accessibility Evaluation Tool (WAVE) to see how my first blog reflection post is evaluated.

WAVE evaluation of my 2nd reflection page

The WAVE test on my webpage was very helpful and provided good information on how I can make my post more accessible, for example by adding alternative text that describes each image.
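To make the alt-text point concrete, here is a minimal sketch of the kind of check a tool like WAVE performs. It is not WAVE's actual implementation, just a small Python script (using only the standard library) that scans an HTML snippet for images with no alt attribute; the file names and snippet are made up for illustration.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Flag images with no alt attribute at all.
            # (An intentionally empty alt="" marks a decorative image,
            # which is acceptable, so only a missing attribute is flagged.)
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "?"))

# Hypothetical snippet: one image with alt text, one without.
snippet = """
<img src="flamingo.png" alt="A green flamingo wearing a baseball cap">
<img src="hotspring.png">
"""

checker = AltTextChecker()
checker.feed(snippet)
print(checker.missing_alt)  # the images an accessibility checker would flag
```

Running this prints only the second image's source, since the first already describes itself to screen-reader users.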

Weekly Reflection Post 3

Reflection on week 6: Artificial Intelligence, Academic Integrity, and more

This week we took a deeper look at AI…

Lucas Wright spoke on the fast-growing use of AI in Canada’s education system. He said that teachers and students need to adopt Critical Generative AI Literacy. To do so, an AI user needs to understand how the tools work and be aware of the implications of AI for society and the environment.

A big part of Lucas’s talk that stood out to me is that AI, while advanced, has had flaws when it comes to accuracy.

To explore this, I tested the accuracy of the following three prompts:

Prompt: “Generate an image that looks like it was drawn by a kindergarten student of a green flamingo wearing a baseball cap”

Output:

Accuracy rating: 10/10

Prompt: “Act as a principal that has just found me stealing sugar packets from the cafeteria, you are a laid back chill principal and I am one of your favourite students”

Output:

[Leaning against the doorframe, arms crossed, half a smile]

Alright, alright, help me out here for a second
 I walk into the cafeteria, minding my own business, and what do I see? One of my star students—someone I genuinely like—stuffing sugar packets into their hoodie like we’re running a black-market bakery

Accuracy rating: 10/10

Both of the first prompts produced very accurate results, so I wanted to try a more detailed image to see if it would miss anything…

Prompt:

“Generate an image of three friends at a Japanese hotspring. The first friend is a chubby walrus wearing a green and blue polka dot bathing suit. The second friend is a hipster camel wearing a chain and white tank top. The third friend is a skinny deer wearing black glasses drinking a soda. Have nice polished wood in the background, as well as the hotsprings”

Output:

Accuracy: 9/10

Here we can see ChatGPT correctly following my prompt but missing some details: the walrus’s bathing suit does not really have green, and I never said to give the camel glasses.

Weekly Reflection Post 2

Reflection On Week 5: Intellectual Property, Copyright, Open Licensing, and more

This week we looked at things such as open education, copyright, licensing, and artificial intelligence.

My experiences with copyright:

As a university student I have used a lot of other people’s work, for example drawing on theories or frameworks to reach conclusions about whatever I am studying. I frequently use and cite journal articles to provide support and evidence for the claims in my assignments and projects.

When it comes to ownership of material, I think of a time when I wish I had kept full ownership of one of my videos. I had made a viral video, and many companies reached out asking to repost it with credit, so I gave permission to some of them. One of those companies was Barstool Sports. Fast forward a year, and a company reached out offering $1000 USD to use my video in a commercial. I was super excited because I had only ever received a tag as credit from previous companies. I went back and read the agreements with each company, and it turned out that under the Barstool Sports agreement, licensing the video to this new company for their commercial could expose me to legal problems. I tried reaching out to Barstool Sports but was ignored every time, and ultimately I could not let the new company use my video. I lost out on $1000 USD, and this experience opened my eyes to the importance of giving credit, but also of keeping ownership and understanding every aspect of any agreement with other parties.

AI and Open Education

AI content is often treated as open because it cannot be copyrighted and is considered to be in the public domain. However, when you input something that is copyrighted into AI and have it transform it, the result is a derivative work and requires copyright licensing. Looking at my own experiences with AI and creation, I have only ever used it in the way David Wiley described, as a “more knowledgeable other” that can assist your learning. I have used AI to elaborate on ideas at a range and speed that significantly enhances my learning and creations.

I wanted to try making a derivative work with AI and here is how it went:

Starting Image

I chose an artwork called “The Blue Boy” by Thomas Gainsborough and will be using ChatGPT 4.0 to transform it into an anime-style image similar to the famous Studio Ghibli style.

The Blue Boy by Thomas Gainsborough

Prompt: “Make Ghibli Effect”

Output:

The Blue Boy by Thomas Gainsborough in Ghibli Style generated by ChatGPT 4.0

The AI worked perfectly. If I were to publish this, it would be a derivative work, and I would have to credit Thomas Gainsborough.

One other thing to add: there has been a lot of debate around whether using ChatGPT 4.0 to turn your images into Ghibli-style images is copyright infringement in itself, since you are effectively taking Studio Ghibli’s style of animation.

Weekly Reflection Post 1

Reflection on Week 4: Digital Literacy Frameworks

This week we looked at Digital Literacy and the importance of being able to appropriately access, analyze, and construct knowledge from digital information.

Through my exploration of The B.C. Post-Secondary Digital Literacy Framework there were two things that I chose to take a deeper look into…

First, something that stuck with me from the digital literacy framework was this passage:

“A person’s access to adequate hardware and software is required for developing digital literacy. However, not all people in B.C. have access to hardware and software, nor are they included in digital or online environments” (Sanders & Scanlon, 2021).

This passage is important because it highlights barriers to accessing rising technology. I wanted to explore this on a bigger scale, so I found an article that examines these barriers.

The article discusses how these barriers to digital hardware are problematic and can even be seen as a human rights issue. “Millions of people in the USA still have no home access to high-speed Internet,” and “low-income, people of color, older, Native Americans, and rural residents” are especially affected by the divide (Sanders & Scanlon, 2021).

After reading the article, I see that a big factor contributing to the divide is the inability to access or afford high-speed broadband internet. This can stem from financial, educational, or technological barriers, or from living in a rural or marginalized community.

We can see this digital divide perpetuating “social, economic, and political disparities” (Sanders & Scanlon, 2021).

Second, a part of the digital literacy framework that I believe is very important concerns information literacy and understanding that information can be false or biased. It is clear that many companies and publishers spread misinformation, and I wanted to know what leads people to believe such information. The article below had significant findings:

The study found that “users mostly tend to select and share content related to a specific narrative and to ignore the rest.” It also found that users are more likely to consume information that comes from, or is presented by, like-minded people who are connected in “homogeneous, polarized clusters” that share similar views (Del Vicario et al., 2016).

This shows that people are likely to believe information that matches what they already believe and what comes from people they identify with. We see that digital literacy is affected by the digital divide and by inequalities in accessing and adopting technology. For those who do have access, it is critical to acknowledge one’s own biases as well as the potential biases within the information one reads.

References

Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. https://doi.org/10.1073/pnas.1517441113

Sanders, C. K., & Scanlon, E. (2021). The digital divide is a human rights issue: Advancing social inclusion through social work advocacy. Journal of Human Rights and Social Work, 6(2), 130–143. https://doi.org/10.1007/s41134-020-00147-9